Face Morphs Pose Threat to Recognition Systems

Face morphing attacks are a significant and growing threat to the face recognition systems used in modern electronic identity documents. Nicolò Di Domenico, Annalisa Franco, and Matteo Ferrara from the Department of Computer Science and Engineering, University of Bologna, Italy, together with Davide Maltoni, present a new face morphing technique that leverages Arc2Face, an identity-conditioned face foundation model. The work is notable because it generates photorealistic facial images while effectively preserving the identity information of both contributors, exploiting a key vulnerability in passport enrollment procedures that often lack supervised live capture. In rigorous tests on established and newly created datasets, including morphs derived from the FEI and ONOT collections, the approach achieves a morphing attack potential comparable to traditional landmark-based techniques, historically considered the most difficult to detect.

Until recently, defences for digital identity checks relied on detecting the distortions that morphing left behind in altered facial images. A new technique now generates remarkably realistic morphed faces that bypass existing detection systems, posing a serious challenge to current passport and face recognition technologies and demanding more sophisticated security measures.

In many enrollment procedures, facial images are acquired without a supervised live capture process, leaving room for applicants to submit manipulated photographs. In this paper, the authors propose a novel face morphing technique based on Arc2Face, an identity-conditioned face foundation model capable of synthesising photorealistic facial images from compact identity representations. They demonstrate its effectiveness by comparing the morphing attack potential metric against several state-of-the-art morphing methods on two large-scale sequestered face morphing attack detection datasets.
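As a rough illustration of how an identity-conditioned generator could be turned into a morphing tool, the generator is driven by a compact identity vector rather than by pixels, so a morph can be obtained by blending the two subjects' identity embeddings before generation. The sketch below is plain NumPy, with random unit vectors standing in for ArcFace-style embeddings; the actual Arc2Face conditioning pipeline is not shown and the blending strategy is an assumption for illustration only.

```python
import numpy as np

def blend_identities(emb_a, emb_b, alpha=0.5):
    """Linearly interpolate two unit-norm identity embeddings and
    re-normalise, yielding a single blended identity vector that an
    identity-conditioned generator could be conditioned on."""
    blended = (1.0 - alpha) * emb_a + alpha * emb_b
    return blended / np.linalg.norm(blended)

# Toy 512-d embeddings standing in for real face-identity vectors.
rng = np.random.default_rng(0)
a = rng.normal(size=512); a /= np.linalg.norm(a)
b = rng.normal(size=512); b /= np.linalg.norm(b)
m = blend_identities(a, b)

# The blended vector is closer to each subject than the subjects
# are to each other -- the property a double-identity attack needs.
print(float(a @ m), float(b @ m), float(a @ b))
```

Because the blended vector lies between the two identities, its cosine similarity to each subject exceeds the subjects' similarity to one another, which is exactly the property a double-identity attack exploits.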

Face morphing attacks are of growing concern as a threat to the Face Recognition Systems (FRSs) used in electronic identity documents. These attacks exploit enrollment procedures in which facial images are acquired without supervised live capture: two individuals collaborate to create a single morphed facial image combining identity features from both subjects, potentially deceiving the human officer and resulting in a double-identity image being stored on the document's chip.
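For intuition, classic landmark-based morphing first warps both faces so that corresponding facial landmarks align, then blends the aligned pixels. The toy sketch below shows only the blending step on synthetic arrays; the landmark-warping stage is omitted.

```python
import numpy as np

def alpha_blend(img_a, img_b, alpha=0.5):
    """Pixel-wise blend of two images. In a real landmark-based morph,
    both faces are first warped so their landmarks align; the aligned
    images are then combined like this."""
    blended = (1.0 - alpha) * img_a.astype(float) + alpha * img_b.astype(float)
    return blended.round().astype(np.uint8)

# Flat-colour stand-ins for two aligned face photographs.
a = np.full((4, 4, 3), 200, dtype=np.uint8)   # subject A
b = np.full((4, 4, 3), 100, dtype=np.uint8)   # subject B
morph = alpha_blend(a, b)
print(morph[0, 0])  # midpoint pixel value: [150 150 150]
```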

The severity of this attack stems from the blended nature of the morphed image, which can successfully match against both contributing subjects, potentially enabling two individuals to share a single identity document. A successful attack requires the morphed image to convincingly deceive both a human examiner and the FRS used for automatic identity verification. This project received funding from the European Union's Horizon Europe research and innovation programme.
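The success criterion described above can be made concrete: a morph defeats automatic verification only if its embedding matches both contributors. A minimal sketch, with toy 3-d vectors in place of real face embeddings and an illustrative cosine threshold of 0.35 that is not any specific system's operating point:

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def fools_frs(morph, subj_a, subj_b, threshold=0.35):
    """A morphed image defeats verification only if its embedding
    matches BOTH contributing subjects above the threshold."""
    return (cosine(morph, subj_a) >= threshold
            and cosine(morph, subj_b) >= threshold)

# Toy 3-d 'embeddings': the morph sits between the two subjects.
subj_a = np.array([1.0, 0.0, 0.0])
subj_b = np.array([0.0, 1.0, 0.0])
morph  = np.array([1.0, 1.0, 0.0])

print(fools_frs(morph, subj_a, subj_b))   # the blend matches both subjects
print(fools_frs(subj_a, subj_a, subj_b))  # subject A alone matches only A
```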

Arc2Face synthesis replicates landmark-based morphing attack performance and preserves identity

Initial experiments reveal a morphing attack potential comparable to landmark-based techniques, traditionally considered the most challenging to detect. Specifically, the Arc2Face-based method matches the performance of established landmark-based approaches across multiple datasets. The researchers conducted evaluations using two large-scale sequestered face morphing attack detection datasets, alongside two newly created morphed face datasets derived from the FEI and ONOT collections.
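Morphing attack potential (MAP) summarises, over a set of morphs, how often an attack succeeds across one or more recognition systems. The sketch below is a simplified scalar version of that idea, using hypothetical per-FRS similarity scores and thresholds rather than the full MAP matrix reported in the literature.

```python
import numpy as np

def map_cell(scores_a, scores_b, thresholds, min_frs=1):
    """Simplified morphing attack potential: the fraction of morphs whose
    similarity to BOTH contributing subjects exceeds the threshold on at
    least `min_frs` of the recognition systems.
    scores_a, scores_b: (n_morphs, n_frs) similarity-score arrays."""
    accepted = (scores_a >= thresholds) & (scores_b >= thresholds)
    return float((accepted.sum(axis=1) >= min_frs).mean())

# Hypothetical scores: 3 morphs evaluated by 2 recognition systems.
scores_a = np.array([[0.6, 0.5], [0.4, 0.2], [0.7, 0.8]])
scores_b = np.array([[0.5, 0.6], [0.6, 0.1], [0.3, 0.9]])
thr = np.array([0.45, 0.45])

# Two of the three morphs succeed on at least one FRS.
print(map_cell(scores_a, scores_b, thr, min_frs=1))
```

Raising `min_frs` tightens the criterion to morphs that fool several systems at once, which is how MAP captures robustness of an attack across recognition engines.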

These datasets provided a rigorous testing ground for assessing the effectiveness of the new technique against existing state-of-the-art methods. Detailed analysis of the results demonstrates the ability to effectively preserve and manage identity information during morph generation. By leveraging Arc2Face, the research successfully synthesised photorealistic facial images from compact identity representations.

Once the morphs were generated, the researchers assessed their attack potential, confirming a high degree of realism and the capacity to deceive face recognition systems. At the core of this success is the method's ability to blend facial features while maintaining recognisable characteristics of both contributing individuals. The newly generated datasets, derived from FEI and ONOT, broadened the evaluation scope and were made publicly available to encourage further research and benchmarking.

Comparisons against existing deep learning-based morphing techniques showed consistent gains in realism and identity preservation, with the method outperforming other state-of-the-art deep learning-based approaches in attack potential: it reliably produces morphed images that face recognition systems falsely accept as belonging to either of the original individuals.

By explicitly controlling pose and background characteristics, the research ensured the generation of ISO/ICAO compliant images suitable for realistic enrollment scenarios. Under these conditions, the generated morphs exhibited a high degree of visual fidelity and were able to effectively deceive both human examiners and automated face verification systems.

Foundation models generate undetectable face morphs challenging biometric security

Scientists are increasingly concerned with the subtle ways artificial intelligence can be deceived, and this research highlights a particularly troubling vulnerability in modern security systems. While much attention focuses on sophisticated deepfakes, a more immediate threat comes from face morphs, blended images created from two individuals, which can bypass biometric checks.

Existing methods to detect these attacks have largely focused on identifying the telltale signs of how the morph was created, specifically landmark-based techniques. However, this work demonstrates a new approach, utilising advanced foundation models, that can generate remarkably convincing morphs without leaving those same detectable traces. Generating high-quality morphs that evade detection, once considered a difficult problem, now relies on the power of AI models trained on vast datasets of faces, making the creation of deceptive morphs easier and more accessible.

Beyond the technical achievement, this is a worrying development because it suggests a shift in the arms race between attackers and defenders. Malicious actors could soon use readily available tools to create effective identity forgeries, reducing the need for specialised knowledge of image processing. Although the generated morphs achieve comparable attack potential to landmark-based methods, they are not necessarily better at fooling systems.

Also, the evaluation relies on existing morphing attack detection datasets, which may not fully capture the complexities of real-world scenarios. The research does not yet address how to defend against these new, more subtle attacks, but the implications are clear. Future research must move beyond simply detecting how a morph was made and focus on verifying the authenticity of the presented face itself.

This could involve incorporating liveness detection, analysing subtle physiological signals, or developing more advanced methods for assessing image quality. A broader discussion is needed about the security of identity verification systems and the need for stronger safeguards against increasingly sophisticated forms of fraud.

👉 More information
🗞 Arc2Morph: Identity-Preserving Facial Morphing with Arc2Face
🧠 ArXiv: https://arxiv.org/abs/2602.16569

Rohail T.

I am a quantum scientist exploring the frontiers of physics and technology. My work focuses on uncovering how quantum mechanics, computing, and emerging technologies are transforming our understanding of reality. I share research-driven insights that make complex ideas in quantum science clear, engaging, and relevant to the modern world.

Latest Posts by Rohail T.:

Quantum Error Correction Gains a Clearer Building Mechanism for Robust Codes

March 10, 2026

Protected: Models Achieve Reliable Accuracy and Exploit Atomic Interactions Efficiently

March 3, 2026

Protected: Quantum Computing Tackles Fluid Dynamics with a New, Flexible Algorithm

March 3, 2026