
NIPS 2018: Adversarial Vision Challenge (Robust Model Track)

Pitting machine vision models against adversarial attacks.



Overview

Welcome to the Adversarial Vision Challenge, one of the official challenges in the NIPS 2018 competition track. In this competition you can take on the role of an attacker or a defender (or both). As a defender you are trying to build a visual object classifier that is as robust to image perturbations as possible. As an attacker, your task is to find the smallest possible image perturbations that will fool a classifier.

The overall goal of this challenge is to facilitate measurable progress towards robust machine vision models and more generally applicable adversarial attacks. Modern machine vision algorithms are extremely susceptible to small, almost imperceptible perturbations of their inputs (so-called adversarial examples). This property reveals an astonishing difference between the information processing of humans and machines, and it raises security concerns for deployed machine vision systems such as autonomous cars. Improving the robustness of vision algorithms is thus important both to close the gap between human and machine perception and to enable safety-critical applications.

[Figure: illustration of adversarial examples]

Competition tracks

There will be three tracks in which you and your team can compete; this page covers the Robust Model Track:

In this track your task is to build and train a robust model on Tiny ImageNet. The attacks will try to find the smallest image perturbations that change your model's prediction to a wrong class. The larger these perturbations have to be, the better your score (the exact scoring formula will be published soon).
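To make the notion of a perturbation budget concrete, here is a minimal sketch, purely for illustration and not an official baseline, of how a simple single-step untargeted attack (FGSM) perturbs an input and how the L2 size of that perturbation is measured. The function name `fgsm_perturbation`, the `model` classifier, and the step size `epsilon` are all assumptions, not part of the challenge infrastructure.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, image, label, epsilon=0.01):
    """Return a perturbed image and the L2 size of the perturbation.

    `model` is assumed to be any PyTorch classifier taking a batch of
    64x64 Tiny ImageNet images in [0, 1]; `image` has shape (1, 3, 64, 64)
    and `label` has shape (1,).
    """
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then clip to the
    # valid pixel range.
    adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)
    # The quantity the challenge cares about: the L2 distance between the
    # clean and the perturbed image.
    l2_distance = torch.norm((adversarial - image).flatten(), p=2).item()
    return adversarial.detach(), l2_distance
```

A robust model forces this distance up: the attack has to move further from the original image before the prediction flips.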

Evaluation criterion

Models are scored as follows (a minimal Python sketch of this computation follows the list):

  • Let M be the model and S be the set of evaluation samples.
  • We run the five best untargeted attacks against M on every sample in S.
  • For each sample we record the minimum adversarial L2 distance (MAD) across these attacks.
  • If M misclassifies a sample to begin with, the MAD for that sample is recorded as zero.
  • The final model score is the median MAD across all samples; the higher the score, the better.
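The same rule can be written down in a few lines. The following is a minimal sketch of the scoring described above, assuming `distances` holds the per-sample L2 perturbation sizes found by the five attacks and `correct` flags whether the model classified each clean sample correctly; both names are hypothetical.

```python
import numpy as np

def model_score(distances, correct):
    """Median over samples of the minimum adversarial L2 distance (MAD).

    distances: array of shape (num_samples, 5), one column per attack.
    correct:   boolean array of shape (num_samples,), True if the model
               classified the clean sample correctly.
    """
    distances = np.asarray(distances, dtype=float)
    mad = distances.min(axis=1)                    # best (smallest) attack per sample
    mad[~np.asarray(correct, dtype=bool)] = 0.0    # misclassified samples count as 0
    return np.median(mad)                          # higher is better
```

Using the median rather than the mean keeps the score robust against a few samples on which all attacks happen to fail.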

The top-5 attacks against which submissions are evaluated are fixed for two weeks at a time, after which we re-evaluate all current submissions to determine the new top-5 attacks for the following two weeks.

Timeline

(Tentative)
* June 25th, 2018: Challenge begins
* November 1st, 2018: Final submission deadline
* November 15th, 2018: Winners announced

Organizing Team

The organizing team comes from multiple groups: the University of Tübingen, Google Brain, EPFL, and Pennsylvania State University.

The team consists of:
* Wieland Brendel
* Jonas Rauber
* Alexey Kurakin
* Nicolas Papernot
* Behar Veliqi
* Sharada P. Mohanty
* Marcel Salathé
* Matthias Bethge

Sponsors

Amazon AWS

Prizes

To be announced

Resources

A starter kit is currently being prepared and will explain all the nuts and bolts required to get started in the challenge. Stay tuned!

Contact Us

We strongly encourage you to use the public channels for communication between participants and organizers. In exceptional cases, if you have queries or comments that you would prefer to raise through a private channel, you can send us an email at: