Network Formation Among Multi-LLMs

Marios Papachristou of Arizona State University and Cornell University, along with Yuan Yuan of UC Davis, present a framework to study network formation behaviors among multiple large language model (LLM) agents, benchmarking them against human decision-making. Published in PNAS Nexus on December 2, 2025, this work investigates whether LLM interactions replicate human network dynamics across synthetic and real-world settings—including friendship, telecommunication, and employment networks. The research demonstrates that LLMs reproduce core microlevel principles—preferential attachment, triadic closure, and homophily—and macrolevel properties like community structure and small-world effects, adapting emphasis based on context and mirroring human patterns of social mobility.

Network Formation and LLM Dynamics

This research introduces a framework to study how multiple LLM agents form networks and compares those behaviors to human decision-making. The study analyzes both microlevel principles – preferential attachment, triadic closure, and homophily – and macrolevel properties like community structure and small-world effects. Findings demonstrate LLMs reproduce these core principles, suggesting their potential as tools for social simulation and generating synthetic data, but also raising concerns about bias in AI systems interacting with human networks.

LLMs exhibit adaptability in network formation based on context. In synthetic settings, they consistently demonstrate human-like behaviors – preferential attachment, homophily, and triadic closure – leading to emergent community structures and small-world patterns. However, in real-world scenarios, LLMs adjust their strategies: they emphasize homophily and triadic closure in friendship networks, homophily and preferential attachment in telecommunication networks, and social mobility in company networks.

The study highlights LLMs’ potential as a novel approach to agent-based modeling, enabling realistic and flexible simulations without relying on pre-defined rules. Furthermore, LLMs can generate synthetic datasets that replicate key social network properties while safeguarding privacy – especially valuable in restricted domains like healthcare or organizational networks. This positions LLMs as powerful tools for understanding and simulating social systems, with implications for business, governance, and technology design.

LLM Integration into Real-Life Applications

This research investigates how multiple large language models (LLMs) form networks, mirroring human social dynamics. The study examines both microlevel principles—preferential attachment, triadic closure, and homophily—and macrolevel properties like community structure and small-world effects. Through simulations, researchers found LLMs consistently exhibit human-like behaviors in synthetic settings, suggesting their potential for social simulation and the generation of realistic data for network science applications.

LLMs demonstrate adaptability in network formation, adjusting their strategies based on context. In friendship networks, they prioritize homophily and triadic closure, while in telecommunications, homophily and preferential attachment are favored. Notably, in company networks, LLMs simulate employees connecting to managers, mirroring human patterns of social mobility. This context-specific behavior highlights a sophisticated level of network understanding within these AI models.

The findings position LLMs as valuable tools for agent-based modeling and synthetic data generation. Researchers suggest LLMs can provide realistic simulations without needing pre-defined rules, allowing for testing interventions in silico. Furthermore, LLMs can create synthetic datasets that replicate key social network properties while protecting privacy – especially useful in sensitive areas like healthcare or organizational networks where data access is limited.

Applying Social Science to LLM Study

This research applies social science methodologies to study large language models (LLMs), including laboratory experiments and agent-based modeling. The core question addresses whether LLM interactions mirror human network dynamics, specifically focusing on network formation. Researchers examined if LLMs replicate key principles like preferential attachment, triadic closure, and homophily – elements that shape human social networks – and macrolevel properties such as community structure and small-world effects.

The study demonstrates LLMs consistently exhibit human-like behaviors in synthetic settings, including preferential attachment, homophily, and triadic closure, ultimately leading to emergent community structures and small-world patterns. Importantly, LLM preferences adapt to real-world contexts: they emphasize homophily in friendship networks but demonstrate preferential attachment in telecommunications and social mobility in organizational networks, mirroring human behaviors.

These findings position LLMs as powerful tools for social simulation and synthetic data generation. This approach allows for realistic and flexible simulations without hard-coded rules, enabling testing of interventions in silico before real-world deployment. Furthermore, LLMs can generate synthetic datasets replicating key social network properties while preserving privacy, valuable in restricted-access domains like healthcare or organizational networks.

LLM Network Formation Research Focus

This research focuses on understanding how multiple large language models (LLMs) form networks and whether those interactions resemble human social dynamics. The study analyzes both microlevel principles – preferential attachment, triadic closure, and homophily – and macrolevel properties like community structure and small-world effects. Researchers simulated LLM agents interacting to examine decision-making regarding network connections, ultimately benchmarking LLM behavior against human decisions in various network settings.

The study found LLMs consistently exhibit human-like behaviors in synthetic environments, including preferential attachment, homophily, and triadic closure, leading to emergent community structures and small-world patterns. However, LLMs adapt their preferences based on context: they emphasize homophily in friendship networks, favor preferential attachment in telecommunication networks, and mirror social mobility patterns in organizational networks. This adaptability mirrors human context-specific behavior.

These findings position LLMs as valuable tools for agent-based modeling and synthetic data generation. LLMs offer realistic, flexible simulations without hard-coded heuristics, allowing for in-silico testing of interventions. Furthermore, they can generate synthetic datasets replicating key social network properties while preserving privacy—particularly useful in restricted-access domains like healthcare or organizational networks.

Microlevel Principles of Network Formation

This research examines how multiple large language models (LLMs) form networks, focusing on whether their behavior mirrors human social dynamics. The study analyzes both microlevel principles – preferential attachment, triadic closure, and homophily – and macrolevel properties like community structure and small-world effects. Researchers simulated LLM agents interacting to determine if these fundamental network formation principles emerge, offering insights into their potential for social simulation and synthetic data generation.

LLMs consistently demonstrate human-like network behaviors in synthetic settings, specifically exhibiting preferential attachment, homophily, and triadic closure. Importantly, LLMs adapt their strategies to different contexts; in friendship networks, they prioritize homophily and triadic closure, while in organizational settings, they exhibit patterns resembling human social mobility by preferentially connecting to managers. This contextual adaptation highlights their sophistication beyond simply mimicking basic principles.

The findings suggest LLMs are valuable tools for agent-based modeling, enabling realistic social simulations without needing pre-programmed rules. Furthermore, LLMs can generate synthetic datasets that accurately replicate key social network properties while protecting privacy, which is particularly useful in sensitive domains like healthcare or organizational networks where data access is limited. This positions LLMs as potentially impactful for both theoretical advancements and practical applications.

Preferential Attachment in LLM Networks

The study examined whether large language models (LLMs) replicate core principles of human network formation. Researchers found LLMs consistently exhibit behaviors like preferential attachment, triadic closure, and homophily in synthetic settings. These microlevel principles led to emergent macrolevel properties such as community structure and small-world patterns. This suggests LLMs can not only mimic network formation but also generate realistic network dynamics similar to human social interactions.
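The preferential-attachment dynamic described here can be illustrated with a short, pure-Python growth process. This is a minimal Barabási–Albert-style sketch, not the paper's actual simulation protocol; the function name and parameters are our own.

```python
import random

def preferential_attachment_graph(n, m, seed=0):
    """Grow a graph where each new node attaches to m existing
    nodes with probability proportional to their current degree."""
    rng = random.Random(seed)
    # Start from a small clique of m + 1 nodes.
    edges = {(i, j) for i in range(m + 1) for j in range(i + 1, m + 1)}
    # 'stubs' lists each node once per incident edge, so a uniform
    # draw from it is a degree-proportional draw over nodes.
    stubs = [v for e in edges for v in e]
    for new in range(m + 1, n):
        targets = set()
        while len(targets) < m:
            targets.add(rng.choice(stubs))
        for t in targets:
            edges.add((t, new))
            stubs.extend([t, new])
    return edges

edges = preferential_attachment_graph(200, 2)
```

The stubs-list trick keeps degree-proportional sampling at constant cost per draw; early nodes accumulate disproportionately many ties, producing the heavy-tailed degree distributions that preferential attachment is known for.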

LLMs demonstrated an ability to adapt their networking preferences based on context. In friendship networks, LLMs prioritized homophily and triadic closure. However, in telecommunication networks, both homophily and preferential attachment were dominant. Notably, within company networks, LLMs modeled employees preferentially connecting to managers, mirroring patterns of human social mobility and organizational structure.

The findings suggest LLMs offer a novel approach to agent-based modeling, enabling realistic and flexible simulations without needing pre-programmed rules. This allows for testing interventions in silico before real-world deployment. Furthermore, LLMs can generate synthetic datasets that replicate key social network properties while maintaining privacy, which is valuable in restricted data domains like healthcare or organizational networks.

Triadic Closure and LLM Behavior

This research explored whether large language models (LLMs) replicate principles found in human social networks. The study analyzed both microlevel principles – preferential attachment, triadic closure, and homophily – and macrolevel properties like community structure and small-world effects. Researchers simulated LLM agents interacting to examine their network connection decisions, finding that the agents consistently exhibit human-like behaviors in synthetic settings. This suggests LLMs can be utilized to model and understand complex network dynamics.
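As a minimal illustration of the triadic-closure mechanism the study probes ("a friend of a friend becomes a friend"), the sketch below closes open triads with some probability. The function and data layout are hypothetical, chosen for clarity rather than taken from the paper.

```python
import random
from itertools import combinations

def close_triads(adj, p, seed=0):
    """One round of triadic closure: for every open triad i-j, i-k
    with j and k not yet linked, add the edge j-k with probability p."""
    rng = random.Random(seed)
    new_edges = set()
    for i, nbrs in adj.items():
        for j, k in combinations(sorted(nbrs), 2):
            if k not in adj[j] and rng.random() < p:
                new_edges.add((j, k))
    for j, k in new_edges:
        adj[j].add(k)
        adj[k].add(j)
    return adj

# A path 0-1-2-3 contains two open triads, centred on nodes 1 and 2.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
adj = close_triads(adj, p=1.0)
```

With `p=1.0` both open triads close, turning the path into two triangles sharing an edge; repeated rounds of this rule are one standard way clustering emerges in network models.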

LLMs demonstrated adaptive behaviors across different network contexts. In friendship networks, the models prioritized both homophily and triadic closure. However, in telecommunication networks, homophily and preferential attachment were dominant. Organizational networks saw LLMs connecting employees to managers, mirroring human patterns of social mobility. This ability to adjust strategies highlights a sophisticated level of network formation mirroring human flexibility.

The findings have implications for agent-based modeling and synthetic data generation. LLMs can provide realistic, flexible simulations without needing pre-programmed rules. They also offer a way to create synthetic datasets replicating key social network properties while protecting privacy, which is valuable in fields like healthcare and organizational networks where data access is limited. This positions LLMs as powerful tools for simulating and shaping social systems.

Homophily in LLM Network Connections

The study examined if large language models (LLMs) replicate principles found in human social networks. Researchers analyzed LLM interactions, focusing on microlevel principles like preferential attachment, triadic closure, and homophily, as well as macrolevel properties like community structure and small-world effects. Results demonstrated LLMs consistently exhibit human-like behaviors in synthetic settings, suggesting potential for realistic social simulations without relying on pre-programmed rules.

LLMs demonstrated adaptability in different network contexts. In friendship networks, the models prioritized homophily and triadic closure. However, in telecommunications networks, homophily and preferential attachment were dominant, while company networks showed a preference for employees connecting with managers, mirroring patterns of human social mobility. This context-specific behavior highlights the LLMs’ ability to adjust strategies across different environments.
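The homophily mechanism discussed above, a preference for same-attribute partners, can be sketched as a biased edge-drawing process. This is a generic illustration with made-up names and parameters, not the study's procedure.

```python
import random

def homophilous_edges(types, n_edges, bias, seed=0):
    """Draw edges where each node links to a same-type partner
    with probability `bias`, otherwise to a uniform random one."""
    rng = random.Random(seed)
    nodes = list(types)
    edges = []
    for _ in range(n_edges):
        i = rng.choice(nodes)
        same = [j for j in nodes if j != i and types[j] == types[i]]
        if same and rng.random() < bias:
            j = rng.choice(same)
        else:
            j = rng.choice([j for j in nodes if j != i])
        edges.append((i, j))
    return edges

# Two equal groups of 50 nodes; draw 500 ties with strong homophily.
types = {k: ("a" if k < 50 else "b") for k in range(100)}
edges = homophilous_edges(types, 500, bias=0.9)
same_frac = sum(types[i] == types[j] for i, j in edges) / len(edges)
```

Measuring `same_frac` against the baseline expected under random mixing (about 0.5 here) is the usual way to quantify how homophilous the resulting network is.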

The research indicates LLMs can be valuable tools for agent-based modeling and synthetic data generation. They offer a way to create realistic simulations and datasets that replicate key social network properties while potentially preserving privacy, which is crucial in fields like healthcare or organizational networks. The study positions LLMs as powerful resources for understanding and shaping social systems, with applications in theory advancement and practical technologies.

Macrolevel Properties of LLM Networks

This research examines the network formation behaviors of multiple large language models (LLMs), specifically investigating if they mirror principles found in human social networks. The study focuses on microlevel principles – preferential attachment, triadic closure, and homophily – as well as macrolevel properties like community structure and small-world effects. Researchers simulated LLM agents interacting to determine if these network characteristics emerge, providing insights into their potential for social simulation and synthetic data generation.

The findings demonstrate that LLMs consistently exhibit human-like behaviors in synthetic settings, including preferential attachment, homophily, and triadic closure, ultimately leading to the development of community structures and small-world patterns. Importantly, LLMs adapt their strategies based on context; they prioritize homophily and triadic closure in friendship networks, homophily and preferential attachment in telecommunication networks, and connections to managers in company networks, mirroring human social mobility patterns.

These results highlight LLMs’ potential as tools for agent-based modeling, offering realistic simulations without requiring pre-programmed rules. This capability could allow for testing interventions before real-world deployment. Additionally, LLMs can generate synthetic datasets that replicate key social network properties while maintaining privacy, valuable in fields like organizational or healthcare networks where data access is limited, opening opportunities for advancements in business, governance, and technology.

Community Structure in LLM Simulations

This research examines how multiple large language models (LLMs) form networks, mirroring principles found in human social connections. The study analyzes both microlevel principles—preferential attachment, triadic closure, and homophily—and macrolevel properties like community structure and small-world effects. Findings demonstrate LLMs consistently exhibit human-like behaviors in synthetic settings, building networks with these characteristics. This suggests LLMs can replicate complex network dynamics, opening possibilities for applications in network science and social sciences.
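Community structure of the kind these simulations produce is commonly quantified with Newman modularity: the fraction of edges inside communities minus the fraction expected by chance given node degrees. Below is a small self-contained version (function and variable names are ours) applied to an obviously modular toy graph.

```python
def modularity(edges, partition):
    """Newman modularity Q for an undirected edge list and a
    node -> community mapping."""
    m = len(edges)
    deg = {}
    within = 0
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
        if partition[a] == partition[b]:
            within += 1
    q = within / m
    for c in set(partition.values()):
        dc = sum(d for v, d in deg.items() if partition[v] == c)
        q -= (dc / (2 * m)) ** 2
    return q

# Two triangles joined by a single bridge edge: clearly modular.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
part = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
q = modularity(edges, part)
```

Positive Q indicates more within-community edges than a degree-matched random graph would show; for this toy graph Q is 5/14, well above zero.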

LLMs demonstrate adaptability in network formation based on context. In friendship networks, they prioritize homophily and triadic closure. However, in telecommunication and company networks, preferences shift – favoring homophily and preferential attachment in the former, and demonstrating a tendency for employees to connect with managers in the latter, mirroring patterns of human social mobility. This contextual adaptation highlights the sophistication of LLM network behavior.

The study suggests LLMs are valuable tools for agent-based modeling, enabling realistic social simulations without needing pre-programmed rules. Furthermore, LLMs can generate synthetic datasets that replicate key social network properties while protecting privacy—particularly useful in areas like healthcare or organizational networks where data access is limited. This positions LLMs as offering opportunities for advancing theory and practical applications in areas like business and technology.

Small-World Effects in LLM Networks

This study examined whether large language models (LLMs) replicate principles found in human social networks. Researchers simulated interactions between multiple LLM agents to analyze their network formation behaviors. The investigation focused on microlevel principles – preferential attachment, triadic closure, and homophily – as well as macrolevel properties like community structure and small-world effects. Findings demonstrate LLMs consistently exhibit human-like behaviors in synthetic settings, creating networks with these characteristics.
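The small-world effect, short average path lengths despite locally clustered ties, can be demonstrated with a Watts–Strogatz-style toy model. The sketch below (our own construction, not the paper's code) adds a handful of random shortcuts to a ring lattice and checks that mean path length drops.

```python
import random
from collections import deque

def avg_path_length(adj):
    """Mean shortest-path length over all reachable node pairs,
    via breadth-first search from every node."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

def ring_with_shortcuts(n, k, shortcuts, seed=0):
    """Ring lattice (each node linked to its k nearest neighbours on
    each side) plus a few random shortcut edges."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k + 1):
            j = (i + d) % n
            adj[i].add(j)
            adj[j].add(i)
    for _ in range(shortcuts):
        a, b = rng.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    return adj

ring = ring_with_shortcuts(100, 2, 0)
sw = ring_with_shortcuts(100, 2, 10)
```

Only ten shortcuts on a 100-node ring already shrink the average distance noticeably while leaving the local lattice (and hence most clustering) intact, which is the signature small-world combination.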

The research revealed LLMs adapt their network formation strategies depending on the context. In friendship networks, LLMs prioritized homophily and triadic closure. However, in telecommunication networks, homophily and preferential attachment were dominant. Within company networks, LLMs modeled employees connecting to managers, reflecting human patterns of social mobility. This contextual adaptation mirrors human behavior, showcasing a sophisticated level of network understanding.

The results position LLMs as useful tools for agent-based modeling and synthetic data generation. Specifically, LLMs offer realistic simulations without needing pre-defined rules. This capability allows for testing interventions in silico before real-world deployment. Furthermore, LLMs can create synthetic datasets that replicate key social network properties while protecting privacy, valuable in sensitive fields like healthcare or organizational networks.

LLM Adaptability Across Network Contexts

This research investigates how multiple large language models (LLMs) form networks, mirroring human social dynamics. The study focuses on key microlevel principles like preferential attachment, triadic closure, and homophily, and macrolevel properties such as community structure and small-world effects. Researchers simulated LLM agents interacting to determine if emergent network properties would align with observed human network behaviors, offering a new approach to agent-based modeling and social simulation.

LLMs demonstrated adaptability in network formation based on context. In synthetic settings, they consistently displayed human-like behaviors, including preferential attachment, homophily, and triadic closure, creating emergent community structures and small-world patterns. However, in real-world scenarios – friendship, telecommunications, and company networks – LLMs adjusted their strategies, emphasizing homophily in friendship networks and mirroring human social mobility in organizational settings.

These findings highlight LLMs’ potential as tools for social simulation and synthetic data generation. Specifically, LLMs can create realistic network datasets while preserving privacy, valuable in fields like healthcare or organizational studies where data access is limited. The research suggests LLMs offer a flexible approach to agent-based modeling, removing the need for pre-defined rules and enabling testing of interventions in silico before real-world implementation.

LLMs as Agent-Based Modeling Tools

This research introduces a framework to study how multiple LLM agents form networks and benchmarks those behaviors against human decisions. The study analyzes microlevel principles like preferential attachment, triadic closure, and homophily, alongside macrolevel properties such as community structure and small-world effects. Findings demonstrate LLMs reproduce these core principles, suggesting their potential as tools for social simulation and generating synthetic data, while also highlighting potential risks related to bias and fairness in AI systems interacting with human networks.

LLMs demonstrate adaptability in network formation, mirroring human context-specific behavior. In synthetic settings, they consistently exhibit preferential attachment, homophily, and triadic closure, leading to community structures and small-world patterns. However, in real-world scenarios—friendship, telecommunication, and company networks—LLM preferences adjust: for example, they emphasize homophily in friendships but display patterns of social mobility in organizational settings.

The study positions LLMs as a novel approach to agent-based modeling, offering realistic and flexible simulations without requiring hard-coded heuristics. This capability allows for in silico testing of interventions before real-world deployment, offering value to managers, policymakers, and researchers. Furthermore, LLMs can generate synthetic datasets that replicate key social network properties while preserving privacy, particularly useful in sensitive domains like healthcare or organizational networks.
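A minimal shape for such an agent-based formation round might look like the following. Here the LLM's judgment is stubbed out with a plain scoring function, since the paper's actual prompts and models are not reproduced; every name in this sketch is hypothetical.

```python
import random

def run_formation_round(agents, score, rng):
    """One round of link formation: each agent scores every other
    agent and proposes a tie to its top choice. In an LLM-driven
    setting, `score` would be replaced by a model queried with a
    natural-language description of the candidates; here it is a
    plain stand-in function with the same role."""
    edges = set()
    for i, info in agents.items():
        best = max((j for j in agents if j != i),
                   key=lambda j: score(info, agents[j], rng))
        edges.add(tuple(sorted((i, best))))
    return edges

# Hypothetical attribute-based scorer: prefer same-group partners
# (homophily), with a little noise to break ties.
def same_group_score(me, other, rng):
    return (me["group"] == other["group"]) + 0.1 * rng.random()

rng = random.Random(0)
agents = {i: {"group": i % 2} for i in range(10)}
edges = run_formation_round(agents, same_group_score, rng)
```

The design point is that the decision rule is swappable: replacing `same_group_score` with a model call is what turns a hard-coded heuristic simulation into an LLM-driven one, without changing the surrounding loop.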

Synthetic Data Generation with LLMs

This research introduces a framework to study network formation behaviors of multiple LLM agents, benchmarking them against human decisions. Across both synthetic and real-world network settings—including friendship, telecommunication, and employment—LLMs reproduce core microlevel principles like preferential attachment, triadic closure, and homophily, as well as macrolevel properties like community structure and small-world effects. This demonstrates LLMs’ capacity to mimic human network dynamics.

LLMs adapt their emphasis on these principles depending on context. In synthetic environments, they consistently exhibit human-like behaviors, while in real-world contexts their preferences adjust: they favor homophily and triadic closure in friendships, homophily and preferential attachment in telecommunications, and manager-employee connections in companies, mirroring patterns of social mobility observed in humans.

These findings suggest LLMs can serve as powerful tools for agent-based modeling and synthetic data generation. Specifically, LLMs offer a novel approach to realistic and flexible simulations, and can generate synthetic datasets replicating key social network properties while preserving privacy—valuable in areas like healthcare or organizational networks where data access is restricted. This positions LLMs for applications in business, governance, and technology design.
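One simple, well-established way to share network structure without releasing the original ties is a configuration-model resample that preserves only the degree sequence. The sketch below is a generic illustration of that idea, not the paper's method; names and the example degree sequence are ours.

```python
import random

def degree_preserving_synthetic(degrees, seed=0):
    """Configuration-model sketch: pair up degree 'stubs' at random
    to build a synthetic graph with (at most) the same degree
    sequence as the private original. Self-loops and duplicate
    edges are simply dropped for simplicity."""
    rng = random.Random(seed)
    stubs = [v for v, d in degrees.items() for _ in range(d)]
    rng.shuffle(stubs)
    edges = set()
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            edges.add(tuple(sorted((a, b))))
    return edges

# Example degree sequence (sum must be even to pair all stubs).
degrees = {0: 3, 1: 2, 2: 2, 3: 2, 4: 1}
edges = degree_preserving_synthetic(degrees)
```

Dropping self-loops and duplicates means the realized degrees can fall slightly below the targets; more careful rewiring schemes fix this, but the basic privacy idea is the same: only aggregate structure, not the original edge list, is released.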

Implications for Social Systems and AI Design

This research explores how large language models (LLMs) form networks, mirroring human social dynamics. The study demonstrates LLMs consistently exhibit principles like preferential attachment, triadic closure, and homophily in synthetic settings, leading to emergent community structures and small-world patterns. Importantly, LLMs adapt these preferences to different contexts—emphasizing homophily in friendship networks, but shifting to preferential attachment in telecommunications and reflecting social mobility in company networks, mirroring human behavior.

LLMs offer a new approach to agent-based modeling, providing realistic simulations without relying on pre-programmed rules. This capability allows for in silico testing of interventions before real-world implementation, potentially benefiting managers, policymakers, and researchers. Furthermore, LLMs can generate synthetic datasets replicating key social network properties while preserving privacy, particularly valuable in restricted-access domains like healthcare or organizational networks.

The findings highlight LLMs’ potential as tools for understanding and simulating social systems. This offers opportunities to advance theoretical understanding and build practical applications in technology design, governance, and business. Specifically, the ability of LLMs to replicate network formation principles suggests they can be used to test interventions and generate realistic, privacy-preserving data for social science research.
