
NIPS 2018: Adversarial Vision Challenge (Robust Model Track)

Pitting machine vision models against adversarial attacks.



Overview

Welcome to the Adversarial Vision Challenge, one of the official challenges in the NIPS 2018 competition track. In this competition you can take on the role of an attacker or a defender (or both). As a defender you are trying to build a visual object classifier that is as robust to image perturbations as possible. As an attacker, your task is to find the smallest possible image perturbations that will fool a classifier.

The overall goal of this challenge is to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. As of right now, modern machine vision algorithms are extremely susceptible to small and almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference in the information processing of humans and machines and raises security concerns for many deployed machine vision systems like autonomous cars. Improving the robustness of vision algorithms is thus important to close the gap between human and machine perception and to enable safety-critical applications.
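
To make "small and almost imperceptible" concrete, here is a minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al.), one of the simplest ways such perturbations can be computed. It is purely illustrative (the model, tensors, and epsilon are hypothetical stand-ins) and is not one of the attacks used in this challenge.

```python
import torch
import torch.nn.functional as F

def fgsm(model, image, label, eps=2.0 / 255):
    """One-step FGSM: nudge `image` in the direction that increases the loss.

    model: differentiable classifier returning logits (hypothetical);
    image: tensor of shape (1, 3, H, W) with values in [0, 1];
    label: tensor holding the true class index.
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # A tiny step along the sign of the input gradient is often enough to
    # change the prediction while remaining nearly invisible to humans.
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```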

[Figure: illustration of adversarial examples]

Competition tracks

There will be three tracks in which you and your team can compete; this page covers the Robust Model Track.

In this track your task is to build and train a robust model on Tiny ImageNet. The attacks will try to find small image perturbations that change the prediction of your model to the wrong class. The larger these perturbations have to be, the better your score (see below).

Evaluation criterion

Models are scored as follows (higher is better):

  • Let M be the model and S the set of samples.
  • We apply the five best untargeted attacks to M for each sample in S.
  • For each sample we record the minimum adversarial L2 distance (MAD) across these attacks.
  • If the model misclassifies a sample, its MAD is registered as zero.
  • The final model score is the median MAD across all samples; the higher the score, the better.

The top-5 attacks against which submissions are evaluated are fixed for two weeks at a time, after which we evaluate all current submissions to determine the new top-5 attacks for the upcoming two weeks.
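
To make the criterion concrete, here is a minimal sketch of the scoring computation. The evaluation harness and attacks are run by the organizers; the `distances` array below is a hypothetical stand-in for the per-attack L2 perturbation sizes.

```python
import numpy as np

def model_score(distances, misclassified):
    """Median of per-sample minimum adversarial L2 distances (MAD).

    distances: (n_samples, n_attacks) array with the L2 perturbation size
        each of the top-5 attacks needed to fool the model on each sample.
    misclassified: (n_samples,) boolean array, True where the model already
        misclassifies the clean sample.
    """
    mad = distances.min(axis=1)   # best (smallest) attack per sample
    mad[misclassified] = 0.0      # misclassified samples count as zero
    return np.median(mad)         # final score: higher is better

# Hypothetical example: 3 samples, 5 attacks.
distances = np.array([
    [0.8, 1.2, 0.9, 1.1, 1.0],
    [0.5, 0.4, 0.6, 0.7, 0.5],
    [1.5, 1.4, 1.6, 1.3, 1.7],
])
print(model_score(distances, np.array([False, False, True])))  # -> 0.4
```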

Timeline

(tentative)
* June 25th, 2018: Challenge begins
* November 1st, 2018: Final submission date
* November 15th, 2018: Winners announced

Submissions

To make a submission, please follow the instructions in this GitLab repository: https://gitlab.crowdai.org/adversarial-vision-challenge/nips18-avc-model-template

Fork the above template repository on GitLab and follow the instructions in its README.md. You need a crowdAI account and must sign in to GitLab using it. The README also links to several fully functional examples.
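
For orientation, a model submission's entry point looks roughly like the sketch below. It assumes the template's helper package exposes a `model_server` function that serves a Foolbox-wrapped model to the attacks, and uses a hypothetical model loader; the README in the repository is the authoritative interface.

```python
# main.py — rough sketch of a model submission entry point (hypothetical).
import foolbox
from adversarial_vision_challenge import model_server  # from the template, assumed

def create_model():
    # Load your trained Tiny ImageNet classifier here; a torchvision
    # ResNet merely stands in for your own robust model.
    import torchvision.models as models
    net = models.resnet18(num_classes=200).eval()
    # Wrap the network so attacks can query predictions image by image.
    return foolbox.models.PyTorchModel(net, bounds=(0, 255), num_classes=200)

if __name__ == '__main__':
    model_server(create_model())
```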

Organizing Team

The organizing team comes from multiple groups: the University of Tübingen, Google Brain, EPFL, and Pennsylvania State University.

The team consists of:
* Wieland Brendel
* Jonas Rauber
* Alexey Kurakin
* Nicolas Papernot
* Behar Veliqi
* Sharada P. Mohanty
* Marcel Salathé
* Matthias Bethge

Sponsors

* Amazon AWS
* Paperspace

Rules

  • Bethgelab and Google Brain employees can participate but are ineligible for prizes
  • participants are required to release the code of their submissions as open source to be eligible for the final scoring
  • any legitimate input that is not classified by a model will be counted as an adversarial example
  • if an attack fails to produce an adversarial example, we will register a worst-case adversarial instead
  • all classifiers must be stateless and act on one image at a time
  • the decision of each classifier must be deterministic
  • attacks are allowed to query the model on self-defined inputs up to 1,000 times per sample
  • each model has to process one image within 40 ms on a K80 GPU, excluding initialization and setup, which may take up to 100 s (see the timing sketch below)
  • each attack has to process a batch of 10 images within 900 s on a K80 GPU
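
Because the 40 ms budget is a hard constraint, it is worth measuring your model's per-image latency before submitting. The sketch below assumes a `predict` callable that runs your classifier on a single 64×64 Tiny ImageNet image; the numbers you see locally must still be verified on a K80.

```python
import time
import numpy as np

def check_latency(predict, n_trials=100):
    """Rough per-image latency check against the 40 ms rule.

    predict: your model's single-image prediction function (hypothetical;
    adapt it to however your submission exposes the classifier).
    """
    image = np.random.uniform(0, 255, (64, 64, 3)).astype(np.float32)
    predict(image)  # warm-up, analogous to the excluded setup time
    start = time.perf_counter()
    for _ in range(n_trials):
        predict(image)
    per_image_ms = (time.perf_counter() - start) / n_trials * 1000
    print(f"{per_image_ms:.1f} ms per image (budget: 40 ms on a K80)")
```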

Prizes

  • $15,000 worth of Paperspace cloud compute credits: the top-20 teams in each track (defense, untargeted attack, targeted attack) as of September 28 will receive $250 each.

Resources

Contact Us

We strongly encourage you to use the public channels mentioned above for communication between participants and organizers. In exceptional cases, if you have a query or comment that requires a private communication channel, you can send us an email at: