This repository provides open access to research outputs for secure and resilient AI, including datasets and code, so they can be reused, verified, and built upon.
| Category | Title | Description | Link |
|---|---|---|---|
| Adversarial Defences | Free Adversarial Training | This repository provides code for training and evaluating models on the ImageNet dataset; the implementation is adapted from the official PyTorch repository. | GitHub |
| Adversarial Defences | Friendly Adversarial Training | This repository provides code for friendly adversarial training (FAT). | GitHub |
| Adversarial Defences | Pytorch Adversarial Training CIFAR | This repository provides simple PyTorch implementations of adversarial training methods on CIFAR-10; a minimal PGD adversarial training sketch appears after the table. | GitHub |
| Adversarial Defences | Uptane – Secure Software Updates for Vehicles | Uptane is an open-source framework designed to ensure the authenticity of software updates in vehicles, even in adversarial environments. It is particularly relevant for mitigating Sybil attacks targeting over-the-air (OTA) updates. | Uptane |
| Adversarial Attacks | PyTorchFI | PyTorchFI is a runtime perturbation (fault injection) tool for deep neural networks (DNNs), implemented in PyTorch. It enables users to perturb the weights or neurons of a DNN at runtime; a plain-PyTorch sketch of the idea appears after the table. | GitHub |
| Robustness Evaluation | Adversarial Robustness Toolbox (ART) | ART provides tools that enable developers and researchers to defend and evaluate machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference. ART supports all popular machine learning frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types (images, tables, audio, video, etc.), and all machine learning tasks (classification, object detection, speech recognition, generation, certification, etc.). A minimal evasion-evaluation sketch appears after the table. | GitHub |
| Robustness Evaluation | MadryLab – Robustness package | A package created by students in the MadryLab to make training, evaluating, and exploring neural networks flexible and easy. The lab uses it in almost all of its projects, whether they involve adversarial training or not, and it is a dependency in many of their code releases; a short loading sketch appears after the table. | GitHub |
| Robustness Evaluation | Robustness of Mamba | This work provides a comprehensive evaluation of Vision State Space Models (VSSMs) under natural and adversarial perturbations across various visual tasks, revealing their strengths and limitations compared to traditional architectures. | GitHub |
| Simulator | CARLA-GeAR | This code enables dataset generation for a systematic evaluation of the adversarial robustness of custom models on four tasks: semantic segmentation (SS), 2D object detection (2DOD), 3D stereo-camera object detection (3DOD), and monocular depth estimation (depth). The CARLA simulator provides photo-realistic rendering of the meshes and full control of the autonomous-driving environment, making it possible to build datasets that include surfaces to which an adversary might attach physically realizable adversarial patches to change the network prediction. | GitHub |
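The adversarial-training entries above (Free, Friendly, and the CIFAR-10 implementations) all build on the same core loop: craft adversarial examples on the fly and fit the model on them. Below is a minimal PGD adversarial training sketch in plain PyTorch; it is not code from any of the listed repositories, and the hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L-inf PGD with a random start inside the eps-ball around x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    # Standard adversarial training: train only on the PGD examples.
    model.train()
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = pgd_attack(model, x, y)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        optimizer.step()
```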
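PyTorchFI's own API is not reproduced here; the sketch below only illustrates the underlying idea of runtime perturbation in plain PyTorch, corrupting one weight directly and one neuron via a forward hook. The toy model, indices, and injected values are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy model standing in for a real DNN (illustrative only).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))

x = torch.rand(1, 3, 32, 32)
clean_out = model(x)

# Weight perturbation: overwrite one conv weight with a faulty value.
with torch.no_grad():
    original = model[0].weight[0, 0, 0, 0].item()
    model[0].weight[0, 0, 0, 0] = 10.0  # injected fault (arbitrary value)

faulty_out = model(x)
print("max output delta:", (faulty_out - clean_out).abs().max().item())

# Neuron perturbation: a forward hook that corrupts one activation at runtime.
def neuron_fault(module, inputs, output):
    output = output.clone()
    output[0, 0, 0, 0] = 0.0  # stuck-at-zero neuron (illustrative)
    return output

handle = model[0].register_forward_hook(neuron_fault)
hooked_out = model(x)
handle.remove()

# Restore the original weight after the experiment.
with torch.no_grad():
    model[0].weight[0, 0, 0, 0] = original
```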
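As a concrete example of ART's evaluation workflow, the sketch below wraps a PyTorch classifier and compares clean accuracy against accuracy under an FGSM evasion attack, following the pattern of ART's getting-started examples. The toy model, random data, and `eps` value are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy CIFAR-10-shaped classifier (stand-in for a real trained model).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(3, 32, 32),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# Dummy test batch; replace with real data.
x_test = np.random.rand(16, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=16)

# Craft evasion examples with FGSM and measure the accuracy drop.
attack = FastGradientMethod(estimator=classifier, eps=8 / 255)
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(1) == y_test).mean()
print(f"clean acc: {clean_acc:.2f}, adversarial acc: {adv_acc:.2f}")
```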
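The MadryLab robustness package exposes dataset and model helpers; the snippet below follows the loading pattern from its documentation, with the dataset path, architecture, and loader settings as illustrative assumptions rather than required values.

```python
# Sketch following the robustness package's documented quickstart;
# the path and hyperparameters below are illustrative assumptions.
from robustness import model_utils
from robustness.datasets import CIFAR

# Wrap CIFAR-10; the argument is the directory where the dataset lives.
ds = CIFAR('/path/to/cifar')

# Build a ResNet-50 for this dataset (a checkpoint can be restored via
# resume_path=...); returns the model and the loaded checkpoint.
model, _ = model_utils.make_and_restore_model(arch='resnet50', dataset=ds)

# Standard train/validation loaders from the dataset wrapper.
train_loader, val_loader = ds.make_loaders(batch_size=128, workers=8)
```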