This page features comprehensive surveys and review articles that map the evolving landscape of secure and resilient AI. These resources offer valuable overviews of existing methodologies, systematized taxonomies, and identified research gaps to guide further investigation and development.
Title | Publisher | Year | Description | Link |
---|---|---|---|---|
An Introduction to Adversarially Robust Deep Learning | IEEE Transactions on Pattern Analysis and Machine Intelligence | 2024 | This work presents a comprehensive survey of adversarial machine learning since 2013, covering key attack and defense strategies, taxonomies, and theoretical insights into adversarial robustness, fragility, and certification. | IEEE Xplore |
The Impact of Adversarial Attacks on Federated Learning: A Survey | IEEE Transactions on Pattern Analysis and Machine Intelligence | 2024 | This paper surveys adversarial attacks against federated learning, categorizing threats such as poisoning and inference attacks across the federated training pipeline and reviewing corresponding defense mechanisms. | IEEE Xplore |
Physical Adversarial Attack Meets Computer Vision: A Decade Survey | IEEE Transactions on Pattern Analysis and Machine Intelligence | 2024 | This work introduces the concept of the “adversarial medium” as a physical carrier of perturbations, proposes a hexagonal indicator (hiPAA) to evaluate physical adversarial attacks across six key dimensions, and presents comparative results for vehicle and person detection tasks. | IEEE Xplore |
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses | IEEE Transactions on Pattern Analysis and Machine Intelligence | 2023 | This paper categorizes dataset vulnerabilities and security threats, addressing attacks during training and testing phases, and explores defense mechanisms against dataset tampering. | IEEE Xplore |
Physical Adversarial Attacks for Surveillance: A Survey | IEEE Transactions on Neural Networks and Learning Systems | 2024 | This work reviews recent advances in physical adversarial attacks for surveillance tasks, categorizes them into human-designed and deep learning–based methods, and analyzes their impact across multi-modal sensing modalities including RGB, infrared, LiDAR, and multispectral data. | IEEE Xplore |
Unraveling Attacks to Machine-Learning-Based IoT Systems: A Survey and the Open Libraries Behind Them | IEEE Internet of Things Journal | 2024 | This work explores six key attack types targeting ML-based IoT systems, categorizes threat models and attack vectors, and provides supporting resources including open-source libraries for analysis and defense. | IEEE Xplore |
How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses | IEEE Access | 2024 | This work compiles recent adversarial attacks in object recognition based on attacker knowledge, reviews modern defenses by strategy, and discusses impacts on Vision Transformers, practical applications like autonomous driving, and related datasets and metrics. | IEEE Xplore |
A Survey on Attacks and Their Countermeasures in Deep Learning: Applications in Deep Neural Networks, Federated, Transfer, and Deep Reinforcement Learning | IEEE Access | 2023 | This survey comprehensively analyzes attacks and defenses across Deep Neural Networks, Federated Learning, Transfer Learning, and Deep Reinforcement Learning, covering diverse threat models and mitigation strategies, and evaluating them across application domains, datasets, and testbeds. | IEEE Xplore |
Physical Adversarial Attacks for Camera-Based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook | IEEE Access | 2023 | This work surveys physical adversarial attack methods for camera-based smart systems, categorizing them by target task, such as classification, detection, face recognition, semantic segmentation, and depth estimation, and evaluates their performance based on effectiveness, stealthiness, and robustness. | IEEE Xplore |
Defense strategies for Adversarial Machine Learning: A survey | Computer Science Review | 2023 | This paper surveys defense strategies against adversarial machine learning, proposing a taxonomy of countermeasures organized by the attacks they mitigate and discussing open challenges for deploying robust models. | Elsevier |
A systematic survey of attack detection and prevention in connected and autonomous vehicles | Vehicular Communications (Elsevier) | 2022 | This paper provides a comprehensive overview of attack detection and prevention strategies in connected and autonomous vehicles (CAVs). It categorizes attack types, such as in-vehicle network and inter-vehicle communication attacks, evaluates existing detection and prevention mechanisms, and highlights the need for robust security frameworks against evolving cyber threats in CAV systems. | Elsevier |
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective | arXiv | 2024 | This paper provides a systematic survey of adversarial attacks across the machine learning lifecycle (training, deployment, and inference), offering a unified framework for understanding and comparing attack methods. | arXiv |
On the adversarial robustness of multi-modal foundation models | arXiv | 2023 | This paper demonstrates that imperceptible perturbations to input images can manipulate the outputs of multi-modal foundation models, such as Flamingo, leading to misleading captions that may direct users to malicious websites or disseminate false information. These findings highlight the necessity for implementing countermeasures against adversarial attacks in deployed multi-modal models. | arXiv |
Adversarial Machine Learning: A Survey | arXiv | 2023 | This paper offers a comprehensive survey of adversarial defenses across the machine learning lifecycle (pre-training, training, post-training, deployment, and inference), categorizing methods against backdoor attacks, weight attacks, and adversarial examples within a unified framework. | arXiv |
A Systematic Review of Robustness in Deep Learning for Computer Vision: Mind the gap? | arXiv | 2022 | This work systematically reviews non-adversarial robustness in deep learning for computer vision, addressing definitions, evaluation datasets, robustness metrics, and key defense strategies. | arXiv |
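Many of the evasion attacks these surveys catalogue descend from the Fast Gradient Sign Method (FGSM): perturb the input in the sign of the loss gradient, scaled by a small budget ε. A minimal sketch of that idea on a hand-rolled logistic classifier (the weights, inputs, and ε here are illustrative toy values, not drawn from any of the listed papers):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Return an adversarial copy of x for the logistic model p = sigmoid(w.x + b).

    For binary cross-entropy, the gradient of the loss w.r.t. the input x
    is (p - y) * w; FGSM steps in the sign of that gradient, scaled by eps.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w              # d(loss)/dx
    return x + eps * np.sign(grad_x)

# Toy demo: a point cleanly classified as class 1 flips after the attack.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([0.4, 0.1])              # w.x + b = 0.7 > 0 -> class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)
print(w @ x + b > 0)                  # -> True  (clean input: class 1)
print(w @ x_adv + b > 0)              # -> False (adversarial input: class 0)
```

The same sign-of-gradient step, applied iteratively with projection onto an ε-ball, gives the stronger PGD attack that many of the surveyed defenses are evaluated against.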