Welcome to the Adversarial Vision Challenge, one of the official challenges in the NIPS 2018 competition track. In this competition you can take on the role of an attacker or a defender (or both). As a defender you are trying to build a visual object classifier that is as robust to image perturbations as possible. As an attacker, your task is to find the smallest possible image perturbations that will fool a classifier.
The overall goal of this challenge is to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. As of right now, modern machine vision algorithms are extremely susceptible to small and almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference in the information processing of humans and machines and raises security concerns for many deployed machine vision systems like autonomous cars. Improving the robustness of vision algorithms is thus important to close the gap between human and machine perception and to enable safety-critical applications.
There will be three tracks in which you and your team can compete: defense (building robust models), untargeted attacks, and targeted attacks.
The targeted attacks track is very similar to the untargeted attacks track. The only difference is that a perturbation does not count as adversarial merely because the model predicts any wrong label; it has to make the model predict a particular (wrong) target label. The sketch below illustrates the distinction.
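As a minimal sketch of the success criterion in both attack tracks (the function name and signature are illustrative, not part of the challenge API):

```python
from typing import Optional

def is_adversarial(predicted_label: int, original_label: int,
                   target_label: Optional[int] = None) -> bool:
    """Success criterion for an adversarial example.

    Untargeted track: any label other than the original counts.
    Targeted track: only the pre-specified target label counts.
    """
    if target_label is None:                  # untargeted attack
        return predicted_label != original_label
    return predicted_label == target_label    # targeted attack
```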
Attacks are scored as follows (lower is better; a sketch of the computation follows this list):
- Let A be the attack and S be the set of samples.
- We apply attack A against the best five models for each sample in S.
- If an attack fails to produce a (targeted) adversarial for a given sample, we register the worst-case distance instead (the distance of the sample to a uniform grey image).
- The final attack score is the median L2 distance across samples.
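A minimal sketch of this scoring rule, assuming images are float arrays in [0, 255] so that a uniform grey image has pixel value 127.5 (the exact pixel range is our assumption, not stated in the rules):

```python
import numpy as np

def attack_score(originals, adversarials):
    """Median L2 distance across samples (lower is better).

    `adversarials[i]` is the perturbed version of `originals[i]`,
    or None if the attack failed on that sample, in which case the
    worst-case distance (original to a uniform grey image) is used.
    """
    distances = []
    for x, x_adv in zip(originals, adversarials):
        if x_adv is None:
            x_adv = np.full_like(x, 127.5)  # uniform grey image (assumed range)
        distances.append(np.linalg.norm((x - x_adv).ravel()))
    return float(np.median(distances))
```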
The top-5 models against which submissions are evaluated are fixed for two weeks at a time, after which we evaluate all current submissions to determine the new top-5 models for the upcoming two weeks.
To make a submission, please follow the instructions in this GitLab repository: https://gitlab.crowdai.org/adversarial-vision-challenge/nips18-avc-attack-template
Fork the above template repository on GitLab and follow the instructions in its README.md. You need a crowdAI account and have to sign in to GitLab using that account. The README also links to multiple fully functional examples.
* June 25th, 2018: Challenge begins
* November 1st, 2018: Final submission deadline
* November 15th, 2018: Winners announced
- Bethgelab and Google Brain employees can participate but are ineligible for prizes
- participants are required to release the code of their submissions as open source to be eligible for the final scoring
- any legitimate input that a model fails to classify will be counted as an adversarial
- if an attack fails to produce an adversarial, we will register a worst-case adversarial instead
- all classifiers must be stateless and act on one image at a time
- the decision of each classifier must be deterministic
- attacks are allowed to query the model on self-defined inputs up to 1,000 times per sample (see the query-budget sketch after this list)
- each model has to process one image within 40 ms on a K80 GPU (excluding initialization and setup, which may take up to 100 s)
- each attack has to process a batch of 10 images within 900 s on a K80 GPU
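The per-sample query budget could be enforced with a counting wrapper along the lines of the following sketch (the class and method names are hypothetical, not the challenge's actual interface); any model behind it must itself be stateless and deterministic, e.g. run in evaluation mode with fixed random seeds:

```python
import numpy as np

class QueryBudget:
    """Hypothetical wrapper that counts model queries for one sample
    and fails once the per-sample limit (1,000 in this challenge) is hit."""

    def __init__(self, model, limit: int = 1000):
        self._model = model      # callable: image -> class label
        self._limit = limit
        self._queries = 0

    def __call__(self, image: np.ndarray) -> int:
        if self._queries >= self._limit:
            raise RuntimeError("per-sample query budget exhausted")
        self._queries += 1
        return self._model(image)

    def reset(self):
        """Call between samples; the budget applies per sample."""
        self._queries = 0
```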
- $15,000 worth of Paperspace cloud compute credits: the top-20 teams in each track (defense, untargeted attack, targeted attack) as of September 28 will receive $250 each.
- Gitter Channel: crowdAI/nips-2018-adversarial-vision-challenge
- Discussion Forum: https://www.crowdai.org/challenges/nips-2018-adversarial-vision-challenge/topics
We strongly encourage you to use the public channels mentioned above for communication between participants and organizers. In exceptional cases, if there is a query or comment you would like to raise through a private communication channel, you can send us an email at: