
ImageCLEF 2019 VQA-Med



Overview

Note: Do not forget to read the Rules section on this page.

Motivation

With the increasing interest in artificial intelligence (AI) to support clinical decision making and improve patient engagement, opportunities to generate and leverage algorithms for automated medical image interpretation are currently being explored. Since patients can now access structured and unstructured data related to their health via patient portals, there is also a need to help them better understand their conditions in the context of the data available to them, including medical images.

The clinicians’ confidence in interpreting complex medical images can be significantly enhanced by a “second opinion” provided by an automated system. In addition, patients may be interested in the morphology/physiology and disease status of anatomical structures around a lesion that has already been well characterized by their healthcare providers, and they may not necessarily be willing to pay significant amounts for a separate office or hospital visit just to address such questions. Although patients often turn to search engines (e.g. Google) to disambiguate complex terms or obtain answers to confusing aspects of a medical image, results from search engines may be nonspecific, erroneous, misleading, or overwhelming in terms of the volume of information.

Challenge description

Visual Question Answering is an exciting problem that combines natural language processing and computer vision techniques. Inspired by the recent success of visual question answering in the general domain, we conducted a pilot task in ImageCLEF 2018 focused on visual question answering in the medical domain (VQA-Med 2018). Following the success of the inaugural edition and the strong interest from both the computer vision and medical informatics communities, we are continuing the task this year with an enlarged, carefully curated dataset. As in last year's edition, given a medical image accompanied by a clinically relevant question, participating systems are tasked with answering the question based on the visual image content.
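
To make the task setup concrete, the sketch below shows one minimal way such a system could be structured (in Python/PyTorch): a CNN image encoder and an LSTM question encoder whose fused representation is classified over a fixed answer vocabulary. This is not an official baseline; all architecture choices, dimensions, and vocabulary sizes are assumptions made purely for illustration.

    # Minimal medical VQA sketch (illustrative only, not an official baseline).
    # All architecture choices and hyper-parameters here are assumptions.
    import torch
    import torch.nn as nn
    from torchvision import models

    class SimpleVQAModel(nn.Module):
        def __init__(self, vocab_size, num_answers, embed_dim=300, hidden_dim=512):
            super().__init__()
            # Image encoder: a ResNet-50 backbone with the final classifier removed.
            # (In practice one would load ImageNet-pretrained weights.)
            cnn = models.resnet50()
            self.image_encoder = nn.Sequential(*list(cnn.children())[:-1])  # -> (B, 2048, 1, 1)
            self.image_proj = nn.Linear(2048, hidden_dim)
            # Question encoder: word embeddings followed by an LSTM.
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            # Fusion by element-wise product, then classification over candidate answers.
            self.classifier = nn.Sequential(
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_answers),
            )

        def forward(self, images, question_tokens):
            img = torch.relu(self.image_proj(self.image_encoder(images).flatten(1)))
            _, (h_n, _) = self.lstm(self.embedding(question_tokens))
            return self.classifier(img * h_n[-1])  # (B, num_answers)

    # Dummy forward pass: a batch of 2 images (3x224x224) and tokenized questions.
    model = SimpleVQAModel(vocab_size=5000, num_answers=1500)
    logits = model(torch.randn(2, 3, 224, 224), torch.randint(1, 5000, (2, 12)))

Participants are of course free to use any architecture (attention-based fusion, transformer encoders, generative answer decoders, etc.); the sketch above is only meant to clarify the input/output structure of the task.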

Data

The data consists of (1) medical images extracted from PubMed Central articles (a subset of the ImageCLEF 2017 caption prediction task) and (2) clinical images selected from the INDIANA dataset and MedPix® (VQA-RAD).

Each image is associated with one or more question-answer pairs in the training and validation sets, and with a single question in the test set.
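
The exact file format of the release has not been announced yet. Purely as an illustration of the data structure described above, the sketch below assumes a hypothetical pipe-delimited layout (image_id|question|answer per line, with the answer column absent for test items) and shows how such records could be parsed.

    # Hypothetical data-loading sketch: the real release format is not yet published,
    # so the pipe-delimited layout assumed here (image_id|question|answer) is a guess.
    from dataclasses import dataclass
    from pathlib import Path
    from typing import List, Optional

    @dataclass
    class VQASample:
        image_id: str
        question: str
        answer: Optional[str]  # None for test items, where no answer is provided

    def load_vqa_file(path: str) -> List[VQASample]:
        """Parse one split file; two-column lines are treated as test items."""
        samples = []
        for line in Path(path).read_text(encoding="utf-8").splitlines():
            parts = [p.strip() for p in line.split("|")]
            if len(parts) >= 3:
                samples.append(VQASample(parts[0], parts[1], parts[2]))
            elif len(parts) == 2:
                samples.append(VQASample(parts[0], parts[1], None))
        return samples

Once the official format is released, only the parsing logic above would need to be adapted.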

The training, validation and test sets will be released in March 2019.

Submission instructions


As soon as the submission system opens, you will find a “Create Submission” button on this page (just next to the tabs).


Further instructions on the submission format will be published soon.

Citations

Information will be posted after the challenge ends.

Evaluation

More on the evaluation criteria will be published soon.

Rules

Note: In order to participate in this challenge you have to sign an End User Agreement (EUA). You will find more information on the ‘Dataset’ tab.

The ImageCLEF lab is part of the Conference and Labs of the Evaluation Forum: CLEF 2019. CLEF 2019 consists of independent peer-reviewed workshops on a broad range of challenges in the fields of multilingual and multimodal information access evaluation, and a set of benchmarking activities carried out in various labs designed to test different aspects of mono- and cross-language information retrieval systems. More details about the conference can be found here.

Submitting a working note with a full description of the methods used in each run is mandatory. Any run that cannot be reproduced from its description in the working notes may be removed from the official publication of the results. Working notes are published in the CEUR-WS proceedings, resulting in the assignment of an individual DOI (URN) and indexing by many bibliography systems, including DBLP. According to the CEUR-WS policies, a light review of the working notes will be conducted by the ImageCLEF organizing committee to ensure quality. As an illustration, the ImageCLEF 2018 working notes (task overviews and participant working notes) can be found within the CLEF 2018 CEUR-WS proceedings.

Important

Participants of this challenge will automatically be registered at CLEF 2019. In order to be compliant with the CLEF registration requirements, please edit your profile by providing the following additional information:

  • First name

  • Last name

  • Affiliation

  • Address

  • City

  • Country

  • Regarding the username, please choose a name that represents your team.

This information will not be publicly visible and will be used exclusively to contact you and to send the registration data to CLEF, which is the main organizer of all CLEF labs.

Participating as an individual (non-affiliated) researcher

We welcome individual researchers, i.e. researchers not affiliated with any institution, to participate. We kindly ask you to provide us with a motivation letter containing the following information:

  • the presentation of your most relevant research activities related to the task/tasks

  • your motivation for participating in the task/tasks and how you want to exploit the results

  • a list of the most relevant 5 publications (if applicable)

  • the link to your personal webpage

The motivation letter should be directly concatenated to the End User Agreement document or sent as a PDF file to bionescu at imag dot pub dot ro. The request will be reviewed by the ImageCLEF organizing committee. We reserve the right to refuse any applicants whose experience in the field is too narrow and would therefore most likely prevent them from finishing the task/tasks.

Prizes

ImageCLEF 2019 is an evaluation campaign organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org). Selected contributions from the participants will be invited for publication the following year in the Springer Lecture Notes in Computer Science (LNCS) series, together with the annual lab overviews.

Resources

Contact us

We strongly encourage you to use the public channels mentioned above for communications between the participants and the organizers. In extreme cases, if there are any queries or comments that you would like to make using a private communication channel, then you can send us an email at:

  • Asma Ben Abacha <asma.benabacha(at)nih.gov>
  • Sadid A. Hasan <sadid.hasan(at)philips.com>
  • Vivek Datla <vivek.datla(at)philips.com>
  • Joey Liu <joey.liu(at)philips.com>
  • Dina Demner-Fushman <ddemner(at)mail.nih.gov>
  • Henning Müller <henning.mueller(at)hevs.ch>
