Trustworthy GenAI over 6G: Frameworks Address Vulnerabilities in Integrated Systems and Evolving Adversarial Agents

The convergence of generative artificial intelligence (GenAI) and sixth-generation (6G) networks presents both exciting opportunities and critical security challenges, and a new study addresses these vulnerabilities head-on. Bui Duc Son and Dong In Kim from Sungkyunkwan University, together with Trinh Van Chien from Hanoi University of Science and Technology and their colleagues, investigate how the integration of technologies such as digital twins and large telecommunication models introduces new avenues for attack within future networks. Their work reveals that compromised systems can manipulate both the physical operation and the intelligent reasoning capabilities of 6G infrastructure, demanding a proactive approach to security. The researchers propose an adaptive evolutionary defense concept that leverages generative AI itself to simulate attacks and continuously refine protective measures, ultimately paving the way for trustworthy and resilient GenAI-enabled 6G networks.

GenAI Security Risks in Emerging 6G Networks

Researchers investigated the increasing security risks associated with integrating generative artificial intelligence into future 6G wireless networks. The convergence of sensing, learning, generation, and reasoning creates new vulnerabilities, rendering traditional security approaches insufficient. Key risks include an expanded attack surface, data poisoning, model stealing, adversarial attacks, privacy concerns, and the potential for cascading failures across network components. The study presents a comprehensive analysis of threats targeting integrated sensing and communication, federated learning, digital twins, and large telecommunication models.

Scientists engineered scenarios in which compromised digital twins and models manipulate both the physical and cognitive aspects of 6G systems. They perturbed sensing signals with crafted waveforms, biasing perception and decision-making in real time, and demonstrated further vulnerabilities through replay and signal forgery attacks. To assess threats to federated learning and diffusion models, researchers implemented label-flipping and model poisoning attacks, shifting decision boundaries and embedding backdoors within training data. They also showed that perturbing diffusion models can yield poisoned synthetic data that degrades subsequent training, while investigations into large telecommunication models revealed vulnerabilities to prompt injection, jailbreak attacks, and data poisoning during training.
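The poisoned-synthetic-data effect can be illustrated with a minimal sketch. The Gaussian "generator", the size of the injected shift, and the nearest-mean classifier below are illustrative stand-ins for the paper's diffusion-model setting, not its actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(mean_pos, n=1000):
    """Toy 'generative model': samples two class-conditional Gaussians."""
    x0 = rng.normal(-2.0, 1.0, n)        # class-0 samples
    x1 = rng.normal(mean_pos, 1.0, n)    # class-1 samples
    return x0, x1

def train_and_eval(x0, x1):
    # Nearest-mean classifier trained on synthetic data,
    # evaluated on fresh samples from the true distribution.
    threshold = (x0.mean() + x1.mean()) / 2
    t0 = rng.normal(-2.0, 1.0, 2000)
    t1 = rng.normal(2.0, 1.0, 2000)
    return (np.mean(t0 < threshold) + np.mean(t1 >= threshold)) / 2

clean_acc = train_and_eval(*generator(mean_pos=2.0))     # faithful generator
poisoned_acc = train_and_eval(*generator(mean_pos=0.0))  # perturbed generator

print(f"accuracy with clean synthetic data:    {clean_acc:.3f}")
print(f"accuracy with poisoned synthetic data: {poisoned_acc:.3f}")
```

Because the downstream classifier trusts the synthetic distribution, even a modest shift in the generator propagates directly into its decision threshold.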
6G Network Attacks Across Sensing and Learning Layers

Researchers investigated vulnerabilities in future 6G networks integrating generative artificial intelligence, focusing on how attacks can propagate across sensing, communication, learning, and cognitive layers. To simulate realistic attacks, scientists engineered scenarios involving compromised digital twins and models manipulating both physical and cognitive aspects of 6G systems. The team developed techniques to perturb sensing signals by injecting specially crafted waveforms, utilizing algorithms designed for power efficiency. Researchers also explored replay and signal forgery attacks, recording and retransmitting outdated signals to desynchronize system perception from the actual environment.
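A minimal sketch of a power-constrained sensing perturbation and a replay, using a simple tone as a stand-in for an integrated sensing waveform. The random perturbation direction and the power budget `epsilon` are illustrative assumptions; a real attacker would optimize the waveform:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean sensing waveform: a unit-amplitude tone standing in for an ISAC pilot.
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 0.05 * t)

# Power-constrained perturbation: a crafted waveform scaled so its power is
# a small fraction (epsilon) of the signal power, mimicking a stealthy,
# power-efficient injection.
epsilon = 0.01  # perturbation-to-signal power ratio
direction = rng.standard_normal(n)       # placeholder for an optimized direction
direction /= np.linalg.norm(direction)   # unit energy
perturbation = direction * np.sqrt(epsilon * np.sum(clean ** 2))
attacked = clean + perturbation

# A replay attack retransmits a stale recording of the channel,
# desynchronizing the system's perception from the live environment.
stale = np.roll(clean, 5)  # outdated snapshot, shifted in time

power_ratio = np.sum(perturbation ** 2) / np.sum(clean ** 2)
print(f"perturbation/signal power ratio: {power_ratio:.4f}")
print(f"correlation of replayed vs live signal: {np.corrcoef(stale, clean)[0, 1]:.2f}")
```

The point of the power constraint is stealth: the injected energy stays far below the legitimate signal's, yet it is shaped to bias downstream perception rather than to raise the noise floor.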

To assess threats to federated learning and diffusion models, scientists implemented label-flipping attacks, shifting decision boundaries under non-IID data distributions, and model poisoning attacks, crafting gradients that bypass detection mechanisms and potentially embedding backdoors activated by specific triggers. To examine vulnerabilities in large telecommunication models, the team developed prompt injection and jailbreak attacks, embedding malicious instructions within sensor metadata and control messages to override safety policies, and data poisoning attacks during training, introducing corrupted data to shift model boundaries or implant backdoors activated by specific tokens. This work demonstrates how these cross-layer attacks can reinforce one another, eroding data quality and model trustworthiness, and underscores the need for end-to-end defenses across the wireless communications ecosystem.
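The label-flipping effect on federated averaging can be sketched with a toy one-parameter logistic model. The client count, data distribution, and learning rate below are illustrative assumptions, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D classification task: the label is 1 when the feature is positive.
def make_client_data(n):
    x = rng.standard_normal(n)
    y = (x > 0).astype(float)
    return x, y

def local_update(w, x, y, lr=0.5, steps=50):
    # Logistic regression on a single weight, trained locally.
    for _ in range(steps):
        p = 1 / (1 + np.exp(-w * x))
        w -= lr * np.mean((p - y) * x)
    return w

clients = [make_client_data(200) for _ in range(5)]

# Honest round: federated averaging over clean clients.
w_clean = np.mean([local_update(0.0, x, y) for x, y in clients])

# Label-flipping round: one compromised client inverts its labels,
# dragging the averaged decision boundary in the wrong direction.
flipped = [(x, 1 - y) if i == 0 else (x, y) for i, (x, y) in enumerate(clients)]
w_poisoned = np.mean([local_update(0.0, x, y) for x, y in flipped])

print(f"clean aggregate weight:    {w_clean:.2f}")
print(f"poisoned aggregate weight: {w_poisoned:.2f}")
```

Even a single flipped client shifts the plain average; under non-IID data the honest updates disagree more among themselves, which makes such malicious drift harder for the server to distinguish from benign heterogeneity.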

Adaptive AI Defenses Against Evolving Attacks

This work demonstrates a novel adaptive evolutionary defense concept for securing generative artificial intelligence integrated into future 6G networks against evolving adversarial attacks. Researchers focused on vulnerabilities arising from multimodal data processing and autonomous reasoning within integrated sensing and communication, federated learning, digital twins, diffusion models, and large telecommunication models. Experiments using a language model for antenna prediction revealed that these wireless predictors are highly sensitive to adversarial perturbations, particularly when attackers adapt their strategies over time. While an attacked model experienced a clear decline in accuracy, a model equipped with the adaptive defense converged more slowly during training but ultimately maintained stable performance.
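The sensitivity finding can be illustrated with an FGSM-style perturbation against a toy linear predictor standing in for the antenna-prediction model. The feature dimension, the budget `epsilon`, and the least-squares fit are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a wireless predictor: a linear model mapping channel
# features to an antenna-selection score (the study uses a language model;
# this only illustrates the sensitivity effect).
n, d = 400, 8
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = (X @ true_w > 0).astype(float)

# Least-squares fit as the "predictor".
w = np.linalg.lstsq(X, 2 * y - 1, rcond=None)[0]

def accuracy(Xe):
    return np.mean((Xe @ w > 0) == (y == 1))

# FGSM-style perturbation: step each input against the sign of the score
# gradient, within an L-infinity budget epsilon.
epsilon = 0.3
signs = np.sign(w)[None, :] * np.where(y[:, None] == 1, -1.0, 1.0)
X_adv = X + epsilon * signs

print(f"clean accuracy:       {accuracy(X):.2f}")
print(f"adversarial accuracy: {accuracy(X_adv):.2f}")
```

A small, bounded shift in every feature is enough to flip predictions whose margin is below the attack budget, which is why unhardened predictors degrade sharply under adaptive attackers.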

The concept involves continuous co-evolution with attacks through AI-driven simulation and feedback, combining physical-layer protection, secure learning pipelines, and cognitive-layer resilience. Researchers validated the defense through iterative feedback and co-evolution with the attacker, confirming its ability to maintain robustness in dynamic attack environments. The study highlights the necessity of adaptive, co-evolving defense mechanisms for securing AI-native 6G systems, since static, pre-configured defenses lack the flexibility to counter evolving threats. Further analysis identified open challenges, including standardization, large-scale testbeds, and privacy concerns arising from multimodal data integration. The team also emphasized the potential impact of quantum computing on both attack and defense strategies.
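The simulate-attack-then-refine feedback loop can be sketched as alternating rounds of worst-case perturbation and adversarial retraining. This toy linear-model setup only illustrates the mechanism; the paper's concept spans far richer attack and defense families:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each round: the attacker crafts worst-case bounded perturbations against
# the defender's current model, then the defender retrains on the attacked
# inputs (adversarial training) -- a minimal co-evolution loop.
n, d = 500, 10
X = rng.standard_normal((n, d))
true_w = rng.standard_normal(d)
y = np.sign(X @ true_w)  # +/-1 labels

def attack(w, eps):
    # Worst-case L-infinity perturbation for a linear scorer.
    return X - eps * y[:, None] * np.sign(w)[None, :]

def robust_acc(w, eps):
    return float(np.mean(np.sign(attack(w, eps) @ w) == y))

def train(Xtr, ytr, w, lr=0.1, steps=200):
    # Logistic-loss gradient descent.
    for _ in range(steps):
        m = ytr * (Xtr @ w)
        g = -(ytr / (1 + np.exp(m)))[:, None] * Xtr
        w = w - lr * g.mean(axis=0)
    return w

eps = 0.3
w = train(X, y, np.zeros(d))            # clean-trained defender
history = [robust_acc(w, eps)]
for _ in range(5):
    X_adv = attack(w, eps)              # attacker adapts to the current model
    w = train(X_adv, y, w)              # defender refines on the simulated attack
    history.append(robust_acc(w, eps))

print("robust accuracy per round:", [round(a, 2) for a in history])
```

The key design point is the feedback channel: the defender never trains against a fixed attack, but against whatever the current attacker produces, so the two populations track each other over time.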

GenAI Vulnerabilities and Co-Evolving Defenses

This work details emerging security vulnerabilities in future 6G networks that increasingly integrate generative artificial intelligence. Researchers identified potential attack vectors across several key components, including integrated sensing and communication, federated learning, digital twins, and large telecommunication models, demonstrating how compromises in one area can cascade and affect others. The team proposes an adaptive evolutionary defense concept, which uses AI-driven simulation to continuously co-evolve with potential attacks, combining protections at the physical layer with safeguards for the AI components themselves. A case study involving a language model used for antenna prediction confirmed the susceptibility of these systems to adversarial manipulation and demonstrated the effectiveness of the proposed defense strategy.

The findings highlight the need for a system-level approach to security in these complex networks, emphasizing co-design of security and performance alongside lightweight, scalable protection mechanisms. Researchers acknowledge that realizing truly resilient 6G networks requires ongoing work in areas such as trust verification and privacy-preserving designs for pervasive sensing and digital twins. Future research should also explore the potential of quantum-assisted training and optimization to accelerate security measures and counter emerging quantum-accelerated attacks, ultimately building more robust and adaptive wireless communication systems.

👉 More information
🗞 Trustworthy GenAI over 6G: Integrated Applications and Security Frameworks
🧠 ArXiv: https://arxiv.org/abs/2511.15206

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.
