Adversarial Robustness Toolbox v0.5 released: crafting and analysis of attack and defense methods for machine learning models

Adversarial Robustness Toolbox

The Adversarial Robustness Toolbox (ART), an open source software library, supports both researchers and developers in defending deep neural networks against adversarial attacks, making AI systems more secure. Its purpose is to allow rapid crafting and analysis of attack and defense methods for machine learning models.

The Adversarial Robustness Toolbox provides implementations of many state-of-the-art methods for attacking and defending classifiers. It is designed to support researchers and AI developers in creating novel defense techniques and in deploying practical defenses for real-world AI systems. For AI developers, the library provides interfaces that support composing comprehensive defense systems from individual methods as building blocks.

Supported attack and defense methods

The Adversarial Robustness Toolbox contains implementations of numerous state-of-the-art attacks (such as the Carlini & Wagner L_2 attack and universal perturbation) as well as defense methods (such as adversarial training).

The details of the work from IBM Research can be found in the research paper. The ART toolbox is developed with the goal of helping developers with:

  • Measuring model robustness
  • Model hardening
  • Runtime detection
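As an illustration of the first task, the core of a gradient-based evasion attack such as the Fast Gradient Sign Method (one of the attack families implemented in ART) can be sketched in a few lines of NumPy. The toy linear model and all function names below are illustrative and are not part of the ART API:

```python
import numpy as np

def fgsm(x, grad, eps):
    # Fast Gradient Sign Method: take a step of size eps in the
    # direction of the sign of the loss gradient w.r.t. the input.
    return x + eps * np.sign(grad)

def loss_grad(x, w, label):
    # Gradient of the cross-entropy loss of a linear softmax model
    # (logits = x @ w) with respect to the input x.
    logits = x @ w
    p = np.exp(logits - logits.max())
    p /= p.sum()
    onehot = np.eye(len(p))[label]
    return w @ (p - onehot)

w = np.array([[2.0, -1.0], [-1.0, 2.0]])  # toy 2-class linear model
x = np.array([0.9, 0.1])                  # clean input, predicted class 0
x_adv = fgsm(x, loss_grad(x, w, label=0), eps=0.5)
# The logits at x_adv now favour class 1: the attack flips the prediction.
```

Measuring model robustness then amounts to evaluating the accuracy drop on such perturbed inputs; in ART, attacks of this kind are exposed as classes that wrap a Classifier object rather than raw gradient functions.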

Changelog v0.5

This release of ART adds two new evasion attacks, several bug fixes, and a number of new features, including access to the learning phase (training/test) through the Classifier API, batching in evasion attacks, and expectation over transformations.

Added

  • Spatial transformations evasion attack (class art.attacks.SpatialTransformations)
  • Elastic net (EAD) evasion attack (class art.attacks.ElasticNet)
  • Data generator support for multiple types of TensorFlow iterators
  • New function and property in the Classifier API allowing explicit control of the learning phase (train/test)
  • Reports for poisoning module
  • Most evasion attacks now support batching, controlled by the new parameter batch_size
  • ExpectationOverTransformations class, to be used with evasion attacks
  • The new parameter expectation of evasion attacks allows specifying the use of expectation over transformations
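The two generic mechanisms above, batching and expectation over transformations, can be sketched independently of the ART API. The functions generate_batched and eot_gradient below, and their signatures, are illustrative assumptions, not the library's actual interface:

```python
import numpy as np

def generate_batched(x, perturb_fn, batch_size=128):
    # Run an attack's perturbation function over x in chunks of
    # batch_size and reassemble the results, mirroring what the new
    # batch_size parameter does for evasion attacks.
    parts = [perturb_fn(x[i:i + batch_size])
             for i in range(0, len(x), batch_size)]
    return np.concatenate(parts)

def eot_gradient(x, grad_fn, transforms, n_samples=8, seed=0):
    # Expectation over transformations: estimate the attack gradient
    # as the average gradient over randomly sampled transformations
    # of the input, making the perturbation robust to them.
    rng = np.random.default_rng(seed)
    grads = [grad_fn(transforms[rng.integers(len(transforms))](x))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

# Example: a batched sign perturbation, and an EOT gradient that is
# averaged over the identity transform and a left-right flip.
x = np.linspace(-1.0, 1.0, 10)
x_adv = generate_batched(x, lambda b: b + 0.1 * np.sign(b), batch_size=4)
g = eot_gradient(x, grad_fn=lambda v: 2.0 * v,
                 transforms=[lambda v: v, lambda v: v[::-1]])
```

Batching bounds the memory used per attack step, while averaging gradients over transformations yields perturbations that survive preprocessing such as rotations or crops.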

Changed

  • Updated list of attacks supported by universal perturbation
  • Updated PyLint and Travis configurations

Fixed

  • Indexing error in C&W L_2 attack (issue #29)
  • Universal perturbation stop condition: attack was always stopping after one iteration
  • Error with data subsampling in AdversarialTrainer when the ratio of adversarial samples is 1

Download & Tutorial

Copyright (C) IBM Corporation 2018