Human Networks and AI Tools Combine to Create Potent Propaganda Systems

Researchers are increasingly concerned by the blurring line between authentic public engagement and manipulative online influence, prompting investigation into a novel phenomenon termed 'cyborg propaganda'. Jonas R. Kunst from BI Norwegian Business School, Kinga Bierwiaczonek of the University of York, and Meeyoung Cha from the Max Planck Institute for Security and Privacy, working with colleagues including Omid V. Ebrahimi from Oxford University, Marc Fawcett-Atkinson from Canada's National Observer, Asbjørn Følstad from SINTEF, Anton Gollwitzer from BI Norwegian Business School, Nils Köbis from the University of Duisburg-Essen, Gary Marcus from New York University, Jon Roozenbeek from the University of Cambridge, and further collaborators at those institutions, demonstrate how this architecture, which combines verified human actors with adaptive algorithmic automation, represents a significant evolution beyond simple bot networks. Their collaborative study highlights the potential for such systems to operate in legal grey areas by leveraging citizen participation, while raising critical questions about whether the technology empowers collective action or reduces individuals to mere extensions of centralised control. Understanding cyborg propaganda matters because it fundamentally reshapes digital political discourse, turning open debate into a contest of algorithmic campaigns.

Scientists term this architecture “cyborg propaganda,” combining verified humans with adaptive algorithmic automation to create a closed-loop system; AI tools monitor online sentiment to optimise directives and generate personalized content for users to post online. Cyborg propaganda exploits a critical legal shield by relying on verified citizens to ratify and disseminate messages, evading liability frameworks designed for automated botnets.

Researchers explore the collective action paradox of this technology, questioning whether it democratizes power by ‘unionizing’ influence or reduces citizens to ‘cognitive proxies’ of a central directive. They argue that cyborg propaganda fundamentally alters the digital public square, shifting political discourse from a democratic contest of individual ideas to a battle of algorithmic campaigns.

A push notification illuminates five thousand smartphones across the country, not as a breaking news alert but as a directive from a partisan campaigning app urging users to help regain control of a slipping narrative on the tax bill. With two taps, each user receives a unique, AI-written caption tailored to their specific background and tone, ready to post to their personal social media networks.

Within minutes, the topic trends, mimicking spontaneous yet converging public sentiment; in reality, it is a calculated, synchronized strike. This phenomenon is rooted in platforms like 'Act.IL' (active until 2022) and current tools like Greenfly, SocialToaster, or GoLaxy, which amplify content to engineer viral trends. Greenfly openly offers clients the ability to "synchronize an army of advocates to amplify your message." These platforms gamify advocacy by issuing missions to volunteers, incentivizing them to blast identical content or copy-paste directives.

While generating volume, these platforms occupy a gray zone between genuine activism and 'astroturfing', masking orchestrated campaigns as grassroots movements. The centralized coordination of decentralized actors is not entirely new, as the paraphrasing tactics of the Chinese '50 Cent Party' show, but generative AI fundamentally disrupts the behavioral economics and physics of digital coordination.

Historically, astroturfing traded stealth for scale: reaching volume required rigid templates, and those templates created forensic fingerprints easily flagged by algorithms. By automating articulation and minimising the human cognitive labor required to rephrase a central narrative, AI industrializes content creation and coordination at near-zero marginal cost. This transition enables a 'multiplier effect', instantly generating thousands of unique message variations tailored to the profile and social background of each human proxy.
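To make the shift concrete, the sketch below shows the kind of near-duplicate check that historically flagged template-based campaigns, and why per-user paraphrasing defeats it. This is a minimal illustration assuming scikit-learn; the example posts are invented for demonstration and are not drawn from the study.

```python
# Minimal sketch: why rigid templates left forensic fingerprints.
# Near-duplicate detection via TF-IDF cosine similarity (assumes scikit-learn).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

templated = [
    "Tell Congress: the tax bill hurts working families. #NoTaxBill",
    "Tell Congress the tax bill hurts working families! #NoTaxBill",
    "Tell Congress: this tax bill hurts working families. #NoTaxBill",
]
paraphrased = [
    "As a nurse, I can't see how this bill helps anyone on my ward.",
    "Ran the numbers on the new tax plan; my small business loses out.",
    "Honestly baffled that anyone thinks this tax bill is a good idea.",
]

def max_pairwise_similarity(posts):
    """Highest cosine similarity between any two distinct posts."""
    tfidf = TfidfVectorizer().fit_transform(posts)
    sims = cosine_similarity(tfidf)
    for i in range(len(posts)):
        sims[i, i] = 0.0          # ignore each post compared with itself
    return sims.max()

print(f"templated:   {max_pairwise_similarity(templated):.2f}")   # near 1.0 -> flagged
print(f"paraphrased: {max_pairwise_similarity(paraphrased):.2f}")  # low -> passes
```

Once every message scores like an independent post, this whole class of detector goes quiet; that silence is the 'multiplier effect' in action.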

Unlike traditional offline coordination, such as supporters holding identical signs at a rally or participating in phone banking, this cyborg variation operates covertly. Because the messages appear to contain organic, individual thoughts, rather than retweets or shared links, the underlying coordination remains largely invisible to the audience. The result is content that seems like genuine human expression, bypassing systems designed to detect coordinated messaging.

Unlike traditional propaganda openly fronted by political elites, cyborg campaigns hide their underlying leadership and coordination from both the public and content moderators. Researchers call this emerging dynamic 'cyborg propaganda': the synchronized and semi-automated dissemination of algorithmically generated articulations of narratives via a large number of verified human accounts.

It differs from fully automated botnets and traditional astroturfing: here, identity is authentic while articulation is synthetic. This fusion of authentic identity and synthetic articulation poses an epistemological challenge, raising the question of whether the technology turns some citizens into puppets or empowers them to operate as a collective in the attention economy.

Researchers frame cyborg propaganda as a structural transformation of collective action, readable either as a distortion of the public sphere that reduces users to 'cognitive proxies' or as a tool for 'unionizing influence' against the algorithmic asymmetry enjoyed by powerful elites. They outline a research agenda mapping the forensic signatures of this new frontier and address the resulting regulatory paradox. Understanding cyborg propaganda requires grasping communication mechanics that defy detection.

Historically, coordinated inauthentic behaviour online employed ‘bot farms’ or human-operated ‘troll farms,’ before shifting to autonomous coordinated AI bot swarms. The emerging frontier represents a qualitative shift in manipulation, involving coordinated authentic activity by verified human accounts with partially algorithmically stage-managed autonomy.

Cyborg propaganda thereby transcends astroturfing, bot amplification, and 'connective action', hybridizing verified human identity with centralized, algorithmic articulation. Crucially, the human layer creates a unique regulatory shield: while authorities can ban automated botnets or foreign troll farms, regulating the speech of verified citizens, even when heavily coordinated, is far more complex. Technically, cyborg propaganda operates via a synchronized workflow.

First, there is the organizer directive: a command-centre app integrates with AI monitors that flag emerging narratives and shifts in public sentiment, enabling data-informed strategic instructions. Automation extends to the directive layer itself, with operatives activating an 'autopilot' mode in which AI identifies wedge issues or divisive rhetoric and drafts directives with minimal human intervention.

Second, there is the AI multiplier: a generative engine that scales central directives into mass individualized content. Historically, coordinated campaigns betrayed automation fingerprints such as templated or duplicated messaging, bursty timing, shallow account histories, and limited interactivity. Today, LLMs bypass these limitations by processing directives alongside user profiles, analysing each user's posting history, syntax, and rhythms.

Replacing identical slogans with style transfer, the system generates unique variations, ranging from academic-sounding arguments to casual complaints, that mimic each user's authentic voice, counterfeiting identity with high fidelity. To drive participation, the architecture may gamify or monetize activity. Personalization cloaks the coordination from the user's social circle, who are accustomed to that voice, while complicating platform detection that depends on clustered linguistic anomalies.

As verified users broadcast this content, they forge a coordinated consensus that evades filters and mimics organic linguistic diversity. This dynamic architecture can function as a closed-loop learning system. AI monitors track real-time reactions, enabling the hub to adjust directives against counter-narratives. High-engagement content is fed back to fine-tune subsequent messaging.

A critical byproduct is data poisoning: as synthetic activity permeates social media, it embeds itself in future training corpora, skewing how mainstream AI amplifies dominant narratives. Cyborg propaganda also offers substantial economic incentives. Traditional operations like troll farms demanded sustained investment in salaries, infrastructure, and oversight to stay synchronized; cyborg systems outsource that labor to volunteers while enjoying a degree of regulatory safety.

Unlike illegal botnets operating in the shadows, cyborg propaganda functions as a legitimate digital campaigning tool. Network analysis, however, can still reveal coordinated inauthentic behaviour, and a detailed examination of campaign infrastructure underpins the proposed forensic framework for detecting cyborg propaganda. Rather than focusing solely on individual account characteristics, a method easily circumvented by using authentic user profiles, the research prioritises network-level analysis to identify coordinated behaviour.

This approach acknowledges that cyborg agents frequently occupy central positions within online networks, acting as bridges between otherwise disconnected user groups. Tools are therefore designed to target accounts with high follower counts and significant network influence, specifically looking for instances where an account’s behaviour fluctuates between appearing human and automated.
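The bridging behaviour described above has a measurable structural signature. The toy sketch below, which assumes the networkx library, flags accounts with disproportionately high betweenness centrality, one plausible proxy for the bridge role; the graph and account names are invented for illustration.

```python
# Sketch: flagging "bridge" accounts between otherwise disconnected communities.
# Assumes networkx; the follower graph is a toy example.
import networkx as nx

G = nx.Graph()
# Two dense communities that share no direct ties...
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3")])
G.add_edges_from([("b1", "b2"), ("b2", "b3"), ("b1", "b3")])
# ...except through a single suspected cyborg account.
G.add_edges_from([("cyborg", "a1"), ("cyborg", "b1")])

centrality = nx.betweenness_centrality(G)
for account in sorted(centrality, key=centrality.get, reverse=True)[:3]:
    print(account, round(centrality[account], 2))
# The "cyborg" node dominates: removing it disconnects the two groups,
# exactly the structural bridging role described above.
```

In practice the same measurement runs on retweet or follower graphs with millions of nodes, where approximate centrality algorithms stand in for the exact computation shown here.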

To quantify unnatural coordination, researchers propose developing coordination indices that measure hyper-synchronicity in posting times and thematic clustering exceeding that expected from organic diffusion. Distinguishing between genuine viral trends, typically exhibiting logistic growth curves, and artificially amplified ‘cyborg trends’ with anomalous onset times forms a key research question.
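Since the paper proposes these indices rather than specifying them, the sketch below is a speculative rendering of both signals on toy data, assuming numpy and scipy: a pairwise burst score stands in for a coordination index, and the rise time of a fitted logistic curve exposes an abrupt, non-organic onset. All function names and thresholds are illustrative assumptions.

```python
# Speculative sketch of two forensic signals on toy data (assumes numpy/scipy).
import numpy as np
from scipy.optimize import curve_fit

# --- 1. Hyper-synchronicity --------------------------------------------------
# Fraction of post pairs landing within `window` seconds of each other;
# organic discussion rarely produces tight cross-account bursts.
def synchronicity_index(timestamps, window=60.0):
    t = np.sort(np.asarray(timestamps, dtype=float))
    pairs = close = 0
    for i in range(len(t)):
        for j in range(i + 1, len(t)):
            pairs += 1
            close += (t[j] - t[i]) <= window
    return close / pairs if pairs else 0.0

rng = np.random.default_rng(0)
organic = rng.uniform(0, 86_400, 50)       # 50 posts spread across a day
cyborg = 43_200 + rng.normal(0, 30, 50)    # 50 posts in one tight burst
print(f"organic burst score: {synchronicity_index(organic):.2f}")
print(f"cyborg  burst score: {synchronicity_index(cyborg):.2f}")

# --- 2. Logistic vs. anomalous onset ------------------------------------------
# Genuine viral trends tend to follow an S-shaped (logistic) adoption curve.
def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-np.clip(r * (t - t0), -60, 60)))

hours = np.arange(24, dtype=float)
cumulative = np.cumsum([0.0] * 11 + [900.0] + [8.0] * 12)  # step-like onset
(K, r, t0), _ = curve_fit(logistic, hours, cumulative,
                          p0=[cumulative.max(), 1.0, 11.0], maxfev=20_000)
rise_time = np.log(81.0) / r  # hours from 10% to 90% of peak adoption
# An implausibly short rise time is one candidate marker of a 'cyborg trend'.
print(f"fitted 10%-90% rise time: {rise_time:.2f} h")
```

A real deployment would calibrate both scores against large samples of known-organic trends; the thresholds separating 'organic' from 'anomalous' are an empirical question the research agenda leaves open.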

Complementing this, supply-chain forensics investigates the commercial technologies enabling cyborg propaganda, using techniques like passive DNS to trace the web infrastructure of coordination hubs and identify client organisations (sketched below). Crucially, the reliance on human recruitment presents a unique vulnerability. Researchers advocate for audit studies involving direct participation in these campaigns, signing up as volunteers to document the user experience and the psychological techniques used to recruit and direct participants.
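On the supply-chain side, passive DNS lets investigators ask which other domains share hosting infrastructure with a known coordination hub. The sketch below is purely hypothetical: the endpoint, response format, and helper function are invented to show the shape of such a query, since real passive-DNS providers each expose their own APIs.

```python
# Hypothetical sketch of supply-chain forensics via passive DNS.
# The service URL and JSON fields below are invented for illustration only.
import requests

PDNS_API = "https://pdns.example.com"  # hypothetical passive-DNS service

def co_hosted_domains(hub_domain: str, api_key: str) -> set[str]:
    """Find domains that resolved to the same IPs as a coordination hub."""
    headers = {"X-Api-Key": api_key}
    ips = requests.get(f"{PDNS_API}/rrset/{hub_domain}",
                       headers=headers, timeout=10).json()["ips"]
    domains: set[str] = set()
    for ip in ips:
        rev = requests.get(f"{PDNS_API}/rdata/{ip}",
                           headers=headers, timeout=10).json()
        domains.update(rev["domains"])
    return domains - {hub_domain}
```

Shared infrastructure is only circumstantial evidence, but it is one of the few signals that can tie a white-label coordination platform to the client organisations deploying it.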

Such audit studies are coupled with investigations into the psychological motivations of those involved, drawing on self-perception theory to explore whether posting extreme, AI-generated arguments leads to internalisation of those views or, conversely, reduces personal responsibility for the expressed content. Longitudinal studies will assess whether reliance on AI degrades nuanced reasoning and fosters radicalisation through cognitive atrophy, alongside an economic analysis of why users exchange their voice for collective reach. Even a small number of automated accounts, after all, can be vastly more effective when coupled with the credibility and reach of numerous verified human users.

The difficulty lies in distinguishing between authentic grassroots movements and astroturf campaigns masquerading as such. Traditional methods of detecting manipulation rely on identifying patterns of inauthentic behaviour, but cyborg propaganda deliberately circumvents these safeguards by leveraging genuine accounts. This isn’t about replacing people with robots; it’s about augmenting human influence with artificial intelligence, creating a hybrid system that’s harder to detect and more difficult to counter.

The implications extend beyond political discourse. Any area where public opinion matters, from health advice to consumer choices, is potentially vulnerable to this form of manipulation. While the study rightly points to the need for new regulatory frameworks, the challenge is significant. Overly broad restrictions could stifle legitimate collective action, while narrowly tailored rules risk being easily evaded. Future research must focus on developing more sophisticated methods for identifying coordinated behaviour, perhaps by analysing linguistic patterns or network structures.

👉 More information
🗞 How cyborg propaganda reshapes collective action
🧠 ArXiv: https://arxiv.org/abs/2602.13088

Rohail T.
