Synthetic aperture radar (SAR) imagery, owing to its unique electromagnetic scattering characteristics, poses persistent challenges for automatic target recognition. Yiming Zhang, Weibo Qin, and Yuntian Liu of Fudan University, together with Feng Wang, address the vulnerability of current neural network-based SAR automatic target recognition (SAR-ATR) systems to adversarial examples. Their research highlights a tendency for these systems to focus excessively on background information, which weakens their robustness against malicious manipulation. The team introduces a new attack method, termed Space-Reweighted Adversarial Warping (SRAW), which subtly deforms images to create adversarial examples, balancing attack effectiveness against visual distortion. SRAW demonstrably outperforms existing attack methods in both stealth and cross-model transferability, marking a significant step towards understanding, and ultimately hardening, the security and reliability of SAR-ATR systems.
Neural network-based SAR-ATR systems are vulnerable to adversarial examples: carefully crafted inputs designed to cause misclassification. Prior research has focused on adapting attack methods established for optical images and on developing new approaches that account for the specific characteristics of SAR imagery, such as its microwave-based imaging process and grayscale representation. A key open challenge is designing attacks that are both effective and suited to the unique properties of SAR data.
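For orientation, the simplest of the classical additive attacks later used as baselines is the fast gradient sign method (FGSM). Below is a minimal PyTorch sketch, assuming a differentiable `model` and input tensors `image` and `label`; it is background illustration, not the paper's method:

```python
# Minimal FGSM sketch for illustration (hypothetical `model`, `image`,
# `label`; not the SRAW method itself). FGSM perturbs the input along the
# sign of the loss gradient, bounded by a pixel-wise budget `epsilon`.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step untargeted attack: move `image` uphill on the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range
```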
This study investigates adversarial attack mechanisms in the SAR domain, focusing on the influence of background clutter and the perturbation magnitudes needed for a successful attack. The researchers observe that SAR imagery exhibits strong background correlation: non-target areas significantly influence prediction accuracy. The effect is exacerbated when deep neural networks (DNNs) designed for optical imagery are applied to SAR data, causing models to over-rely on contextual cues rather than target-specific features. Because SAR data is inherently information-sparse, successful attacks typically require more noticeable alterations than in optical imagery. To address this, the team explored heterogeneous perturbation budgets, strategically allocating different levels of modification to background and target regions so as to maximise attack effectiveness while minimising the overall perceptibility of the adversarial example (see the sketch below). The team measured SRAW against state-of-the-art DNNs, including DenseNet, VGG, ResNet, ResNeXt, and PyramidNet, using the widely adopted MSTAR dataset; experiments showed that SRAW consistently outperforms existing methods in both imperceptibility and adversarial transferability.
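The heterogeneous-budget idea can be made concrete with a mask-weighted variant of PGD. The following sketch is an illustration under assumed budget values and an externally supplied binary target `mask`; SRAW itself applies the reweighting to a spatial deformation field rather than to additive noise:

```python
# Illustrative sketch of heterogeneous perturbation budgets in a standard
# PGD attack (mask source, budget values, and step size are assumptions).
import torch
import torch.nn.functional as F

def reweighted_pgd(model, image, label, mask, eps_tgt=0.01, eps_bg=0.05,
                   step=0.005, iters=20):
    """PGD with a per-pixel L-inf budget: `mask` is 1 on target, 0 elsewhere."""
    budget = eps_tgt * mask + eps_bg * (1.0 - mask)  # smaller budget on target
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        loss = F.cross_entropy(model(image + delta), label)
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            # Project each pixel back into its own budget.
            delta.copy_(torch.maximum(torch.minimum(delta, budget), -budget))
        delta.grad.zero_()
    return (image + delta).clamp(0.0, 1.0).detach()
```

Allocating the larger budget to background clutter exploits the background over-reliance described above while keeping the target region, where a human inspector looks first, nearly untouched.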
Results demonstrate that SRAW achieves an average attack success rate (ASR) of 78.35% across all tested models, surpassing FGSM, MI-FGSM, PGD, CW, LoRa-PGD, and DeCoWa. The study quantified imperceptibility using Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS); SRAW attains an SSIM of 0.5724 and an LPIPS of 0.2930, indicating superior structural preservation and perceptual invisibility. Transferability tests, which generate attacks on one model and evaluate them against others, show an average accuracy drop of 73.33% across all target models.

The work establishes a mathematical framework for optimised spatial deformation with reweighted budgets across foreground and background regions, generating subtle yet potent adversarial examples. Because the information sparsity inherent in SAR imagery leads recognition networks to rely disproportionately on background features, SRAW deforms the image spatially while carefully balancing how much the foreground and background are modified, yielding adversarial examples that strike a favourable balance between structural preservation and perceptual invisibility; a sketch of this deformation mechanism follows below. The authors acknowledge that they have not yet explored SRAW within the full SAR signal processing pipeline; future research will investigate the physical feasibility of the method by integrating it directly into the SAR data acquisition and processing stages.
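To make the deformation mechanism concrete, the sketch below optimises a dense flow field that bilinearly resamples the image, penalising displacement more heavily on the target than in the background. The mask, loss weights, and optimiser settings are illustrative assumptions, not the authors' implementation:

```python
# Sketch of a flow-field warping attack in the spirit of a deformation-based
# method (illustrative weights and settings; not the SRAW reference code).
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Bilinearly resample `image` (N,C,H,W) along `flow` (N,H,W,2) offsets."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    grid = base.to(image.device) + flow  # displaced sampling grid in [-1, 1]
    return F.grid_sample(image, grid, align_corners=True)

def warping_attack(model, image, label, mask, iters=50, lr=0.01,
                   w_tgt=10.0, w_bg=1.0):
    """Optimise a flow field to cause misclassification, penalising
    deformation more inside the target `mask` (N,1,H,W) than outside."""
    n, _, h, w = image.shape
    flow = torch.zeros(n, h, w, 2, device=image.device, requires_grad=True)
    opt = torch.optim.Adam([flow], lr=lr)
    region_w = (w_tgt * mask + w_bg * (1.0 - mask)).permute(0, 2, 3, 1)
    for _ in range(iters):
        adv = warp(image, flow)
        cls_loss = -F.cross_entropy(model(adv), label)  # push away from label
        reg_loss = (region_w * flow.pow(2)).mean()      # reweighted warp cost
        opt.zero_grad()
        (cls_loss + reg_loss).backward()
        opt.step()
    return warp(image, flow).detach()
```

Because the adversarial example is produced by resampling existing pixels rather than adding noise, the perturbation tends to preserve local image structure, which is consistent with the favourable SSIM and LPIPS scores reported above.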
👉 More information
🗞 SRAW-Attack: Space-Reweighted Adversarial Warping Attack for SAR Target Recognition
🧠 ArXiv: https://arxiv.org/abs/2601.10324
