To prevent AI from being misled, IBM launches the open-source Adversarial Robustness Toolbox
To prevent AI models from being deceived into producing erroneous judgments, researchers must subject them to continuous simulated attacks. The IBM research team recently open-sourced the Adversarial Robustness Toolbox, a toolkit for detecting and defending against such attacks, to help developers harden deep neural networks and make AI systems more secure.
The Adversarial Robustness Toolbox is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attack and defense methods for machine learning models. It provides implementations of many state-of-the-art methods for attacking and defending classifiers.
The Adversarial Robustness Toolbox contains implementations of the following attacks:
- DeepFool (Moosavi-Dezfooli et al., 2015)
- Fast Gradient Method (Goodfellow et al., 2014)
- Jacobian Saliency Map (Papernot et al., 2016)
- Universal Perturbation (Moosavi-Dezfooli et al., 2016)
- Virtual Adversarial Method (Miyato et al., 2015)
- C&W Attack (Carlini and Wagner, 2016)
- NewtonFool (Jang et al., 2017)
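To make the attack idea concrete, the sketch below implements the core of the Fast Gradient (Sign) Method from the list above against a toy logistic-regression classifier. The weights, input, and epsilon are all made-up values for illustration; this is not the toolbox's own API, only a minimal demonstration of the perturbation rule.

```python
import math

# Toy logistic-regression "classifier": p(y=1|x) = sigmoid(w.x + b).
# Weights and inputs are invented for this example.
W = [2.0, -3.0, 1.5]
B = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    """Confidence that x belongs to class 1."""
    return sigmoid(sum(wi * xi for wi, xi in zip(W, x)) + B)

def fgsm(x, y, eps):
    """Fast Gradient (Sign) Method: shift each feature by eps in the
    direction that increases the loss. For logistic loss the gradient
    with respect to x_i is (p - y) * w_i."""
    p = predict(x)
    grad = [(p - y) * wi for wi in W]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [0.2, -0.1, 0.4]        # clean input, true label y = 1
x_adv = fgsm(x, y=1, eps=0.3)

print(predict(x))           # high confidence on the clean input
print(predict(x_adv))       # confidence drops below 0.5: misclassified
```

A small, hard-to-notice perturbation is enough to push the example across the decision boundary, which is exactly the vulnerability the toolbox's attack implementations probe at scale.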
The following defense methods are also supported:
- Feature squeezing (Xu et al., 2017)
- Spatial smoothing (Xu et al., 2017)
- Label smoothing (Warde-Farley and Goodfellow, 2016)
- Adversarial training (Szegedy et al., 2013)
- Virtual adversarial training (Miyato et al., 2017)
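As an illustration of the defense side, the sketch below shows feature squeezing by bit-depth reduction, one common variant of the feature-squeezing defense listed above. The pixel values and bit depth are invented for the example; the toolbox's actual implementation is more general.

```python
def squeeze_bits(pixels, bits):
    """Feature squeezing by bit-depth reduction: round each pixel
    (assumed in [0, 1]) onto one of 2**bits - 1 intervals, destroying
    fine-grained adversarial noise while keeping coarse structure."""
    levels = 2 ** bits - 1
    return [round(p * levels) / levels for p in pixels]

clean = [0.50, 0.25, 0.75]
adv   = [0.52, 0.23, 0.77]   # clean input plus a small perturbation

# After squeezing to 3 bits, the perturbed input collapses back onto
# the same values as the clean one, neutralizing the perturbation.
assert squeeze_bits(adv, 3) == squeeze_bits(clean, 3)
```

Comparing a model's prediction on the raw input with its prediction on the squeezed input is also a simple detector: a large disagreement suggests the input was adversarially perturbed.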
The Adversarial Robustness Toolbox is available on GitHub.
In recent years, AI has made many breakthroughs on cognitive problems, and many everyday tasks have begun to incorporate AI technology, such as identifying objects in images and videos, transcribing text, and machine translation. However, if a deep learning network is fed a deliberately designed interference signal, it can easily produce erroneous judgments. This kind of interference is difficult for humans to perceive, and malicious actors may exploit such weaknesses to mislead an AI model's judgment and put it to improper use.
The Adversarial Robustness Toolbox currently focuses on strengthening computer-vision models, giving developers new defense techniques for fending off malicious, misleading attacks when deploying AI models. The toolbox is written in Python, the most commonly used language for building, testing, and deploying deep neural networks, and it contains methods for both mounting and defending against attacks.
First, developers can use the toolbox to measure the robustness of a deep neural network, mainly by recording the model's output under different perturbations. They can then harden the AI model by training it on the attack dataset, and finally flag attack patterns and signals so that interference signals no longer cause the model to produce wrong results.
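The first step of that workflow, recording the model's output under perturbations of increasing strength, can be sketched as a simple evaluation loop. Everything here (the toy linear classifier, the synthetic data, the epsilon values) is invented for illustration and stands in for a real model and dataset.

```python
import random

random.seed(0)

# Toy linear classifier on 2-D points: class 1 if w.x + b > 0.
W, B = [1.0, -1.0], 0.0

def predict(x):
    return 1 if W[0] * x[0] + W[1] * x[1] + B > 0 else 0

def worst_case_perturb(x, y, eps):
    """Shift each coordinate by eps in the direction that pushes the
    score toward the wrong class (sign-of-gradient, as in FGSM)."""
    d = 1 if y == 0 else -1      # direction that flips the score
    return [x[0] + d * eps * (1 if W[0] > 0 else -1),
            x[1] + d * eps * (1 if W[1] > 0 else -1)]

# Synthetic labelled data, kept away from the decision boundary.
data = []
for _ in range(200):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if abs(W[0] * x[0] + W[1] * x[1]) > 0.2:
        data.append((x, predict(x)))

def accuracy_under_attack(eps):
    """Fraction of points still classified correctly after perturbation."""
    ok = sum(predict(worst_case_perturb(x, y, eps)) == y for x, y in data)
    return ok / len(data)

# Record accuracy as the perturbation budget grows.
for eps in (0.0, 0.1, 0.5):
    print(eps, accuracy_under_attack(eps))
```

The resulting accuracy-versus-epsilon curve is a basic robustness profile: the faster it drops, the more fragile the model, and it gives a baseline against which hardening steps such as adversarial training can be compared.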
The Adversarial Robustness Toolbox currently supports TensorFlow and Keras, and support for more frameworks, such as PyTorch and MXNet, is expected in the future. At this stage the defenses mainly target image recognition; later versions are planned to cover more modalities, such as speech recognition, text recognition, or time series.