Adversarial Robustness Toolbox v1.13 releases: crafting and analysis of attacks and defense methods for machine learning models
Adversarial Robustness Toolbox
Adversarial Robustness 360 Toolbox (ART) is a Python library that supports developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats and helps make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) deliberately modified so that the model produces a response desired by the attacker. ART provides the tools to build and deploy defenses and to test them with adversarial attacks.
Defending Machine Learning models involves certifying and verifying model robustness and hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag inputs that might have been modified by an adversary. The attack implementations in ART make it possible to craft adversarial examples against Machine Learning models, which is required to test defenses under state-of-the-art threat models.
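The typical workflow is to wrap a trained model in an ART estimator, craft adversarial examples with one of the attack classes, and compare accuracy on clean and perturbed inputs. The sketch below illustrates this with scikit-learn and the Fast Gradient Method; the dataset, model, and `eps` budget are illustrative choices, not something prescribed by ART.

```python
# A minimal sketch of the attack-and-evaluate workflow, assuming a
# scikit-learn LogisticRegression model; check the ART docs for the
# exact class signatures in your installed version.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=42)

model = LogisticRegression(max_iter=1000).fit(x_train, y_train)

# Wrap the fitted model so ART attacks can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Craft adversarial examples with the Fast Gradient Method (eps = perturbation budget).
attack = FastGradientMethod(estimator=classifier, eps=0.2)
x_test_adv = attack.generate(x=x_test)

# Compare accuracy on clean vs. adversarial inputs.
acc_clean = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
acc_adv = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"clean accuracy: {acc_clean:.2f}, adversarial accuracy: {acc_adv:.2f}")
```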
Supported attack and defense methods
The Adversarial Robustness Toolbox contains implementations of the following attacks:
- DeepFool (Moosavi-Dezfooli et al., 2015)
- Fast Gradient Method (Goodfellow et al., 2014)
- Jacobian Saliency Map (Papernot et al., 2016)
- Universal Perturbation (Moosavi-Dezfooli et al., 2016)
- Virtual Adversarial Method (Miyato et al., 2015)
- C&W Attack (Carlini and Wagner, 2016)
- NewtonFool (Jang et al., 2017)
The following defense methods are also supported:
- Feature squeezing (Xu et al., 2017)
- Spatial smoothing (Xu et al., 2017)
- Label smoothing (Warde-Farley and Goodfellow, 2016)
- Adversarial training (Szegedy et al., 2013)
- Virtual adversarial training (Miyato et al., 2017)
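Adversarial training, for example, hardens a model by mixing adversarial samples into the training data. The sketch below shows one way to do this with ART's `AdversarialTrainer` and a small PyTorch classifier on MNIST; the network architecture, attack strength, and training hyperparameters are assumptions made for illustration.

```python
# A minimal sketch of adversarial training with ART and PyTorch; verify
# the exact class arguments against the ART documentation for your version.
import torch
import torch.nn as nn

from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer
from art.estimators.classification import PyTorchClassifier
from art.utils import load_mnist

(x_train, y_train), (x_test, y_test), min_px, max_px = load_mnist()
x_train = x_train.transpose(0, 3, 1, 2).astype("float32")  # NHWC -> NCHW
x_test = x_test.transpose(0, 3, 1, 2).astype("float32")

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(8 * 12 * 12, 10),
)
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-3),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(min_px, max_px),
)

# Adversarial training: each batch mixes clean samples with FGM-perturbed ones.
attack = FastGradientMethod(estimator=classifier, eps=0.15)
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, batch_size=128, nb_epochs=3)
```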
The details of this work from IBM Research can be found in the accompanying research paper. The ART toolbox is developed with the goal of helping developers better understand:
- Measuring model robustness
- Model hardening
- Runtime detection
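For the first of these, ART ships metrics that quantify robustness empirically. The sketch below uses the `empirical_robustness` helper from `art.metrics`; the choice of model, the `"fgsm"` attack identifier, and the attack parameters are assumptions made for illustration, so check the API reference for the options supported in your version.

```python
# A minimal sketch of measuring empirical robustness; larger scores mean a
# larger (relative) perturbation is needed to change the model's predictions.
from sklearn.datasets import load_digits
from sklearn.svm import SVC

from art.estimators.classification import SklearnClassifier
from art.metrics import empirical_robustness

x, y = load_digits(return_X_y=True)
x = x / 16.0  # scale pixel values to [0, 1]

model = SVC(probability=True).fit(x, y)
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

score = empirical_robustness(classifier, x[:100], attack_name="fgsm", attack_params={"eps": 0.1})
print(f"empirical robustness (FGSM): {score:.3f}")
```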
Changelog v1.13
Added
- Added `CutOut` data augmentation as preprocessor in Numpy, TensorFlow and PyTorch (#1850); a usage sketch follows this list
- Added `MixUp` data augmentation as preprocessor in Numpy, TensorFlow and PyTorch (#1885)
- Added `CutMix` data augmentation as preprocessor in Numpy, TensorFlow and PyTorch (#1910)
- Added regression estimator for black-box scenario (#1930)
- Added additional model support for shadow models (#1930)
- Added Numpy-based data generator to support very large datasets (#1934)
- Added object detection estimator for Faster-RCNN in TensorFlow v2 (#1951)
- Added DP-InstaHide training for classification with differentially private data augmentations (#1956)
- Added Interval Bound Propagation for certified classification in PyTorch (#1965)
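The new augmentation preprocessors can be attached to an estimator or called directly on a batch. The snippet below sketches direct use of the Numpy `CutOut` variant; the class name `Cutout`, its `length` argument, and its location under `art.defences.preprocessor` are assumptions based on these release notes, so confirm them against the v1.13 API reference.

```python
# A minimal sketch of the new CutOut augmentation (names assumed from the
# release notes); ART preprocessors are callable as (x, y) -> (x', y').
import numpy as np

from art.defences.preprocessor import Cutout

# Random batch of 8 RGB images in NHWC format with values in [0, 1].
x = np.random.rand(8, 32, 32, 3).astype(np.float32)

cutout = Cutout(length=8)  # mask an 8x8 patch at a random location per image
x_aug, _ = cutout(x)

print(x.shape, x_aug.shape)
```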
Fixed
- Fixed unexpected shape in `art.utils.load_cifar10` for loading the raw dataset (#1962); see the loader sketch below
- Fixed bug to return correct best poisoning indices in `SleeperAgentAttack` (#1955)
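For reference, standard usage of the CIFAR-10 loader looks like the sketch below; per the changelog, the fix concerns the shape returned when loading the raw (unpreprocessed) dataset, and the flag controlling that mode is not shown here.

```python
# A minimal sketch of art.utils.load_cifar10 in its default, preprocessed mode.
from art.utils import load_cifar10

(x_train, y_train), (x_test, y_test), min_px, max_px = load_cifar10()
print(x_train.shape, y_train.shape, min_px, max_px)  # e.g. (50000, 32, 32, 3) with one-hot labels
```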
Download && Tutorial
Copyright (C) IBM Corporation 2018