crowdAI is shutting down - please read our blog post for more information

LifeCLEF 2018 Expert

Image-based identification of plant species



Posted by IvanE over 2 years ago

What is LifeCLEF?

LifeCLEF 2018 is an evaluation campaign that is being organized as part of the CLEF initiative labs. The campaign offers several research tasks that welcome participation from teams around the world. The results of the campaign appear in the working notes proceedings, published by CEUR Workshop Proceedings (CEUR-WS.org). Selected contributions from participants will be invited for publication the following year in the Springer Lecture Notes in Computer Science (LNCS), together with the annual lab overviews.

Do I have to register with CLEF in order to participate in a LifeCLEF challenge?

Normally you would have to register with CLEF; however, we will handle this for you by sending the necessary data you provided on CrowdAI directly to CLEF. No further action on your part is required.

What is a LifeCLEF task?

LifeCLEF is grouped into several so-called tasks, each tackling a different research topic. A LifeCLEF task may have several challenges, but has only one global dataset.

  • LifeCLEF Bird => 2 challenges
  • LifeCLEF Geo => 1 challenge
  • LifeCLEF Expert => 1 challenge

How can I get access to the data of a challenge?

Please read the Rules section on the challenge description page. There is mainly one thing you have to do:

  • Add additional information to your profile so you comply with the CLEF requirements

When you click on the dataset tab of a challenge you will be guided through the process in order to obtain access to the data.

Posted by d_a_konovalov over 2 years ago

Please clarify the submission format: do we need to submit one line per test image? For example:

2792262;42;1;1

Or multiple lines for each image?

2792262;42;0.9;1
2792262;47;0.1;2

Posted by d_a_konovalov over 2 years ago

Also, is it “0.9” or “0,9” for the decimal separator?

Posted by AlexisJolyInria over 2 years ago

You need to submit one line per test-observation, each test-observation being composed of all the images with the same ObservationId. In other words, as mentioned in the description of the challenge, “the run file to be submitted has to contain as much lines as the number of predictions, each prediction being composed of an ObservationId (the identifier of a specimen that can be itself composed of several images), a ClassId, a Probability and a Rank (used in case of equal probabilities).”
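Since all images sharing an ObservationId must be processed jointly, per-image scores have to be merged into a single per-observation prediction somehow. A minimal sketch of one common strategy, averaging the per-image class probabilities, is shown below; the averaging itself and all IDs and scores are my own illustrative assumptions, not something prescribed by the organizers:

```python
# Sketch: merge per-image class probabilities into per-observation
# predictions by averaging. The averaging strategy, image scores, and
# IDs are illustrative assumptions, not the organizers' method.
from collections import defaultdict

# (observation_id, {class_id: probability}) for each test image
image_scores = [
    (2792262, {42: 0.8, 47: 0.2}),
    (2792262, {42: 1.0, 47: 0.0}),
]

# Group the image-level score dicts by their ObservationId.
obs_scores = defaultdict(list)
for obs_id, scores in image_scores:
    obs_scores[obs_id].append(scores)

# Average the probabilities of each class across an observation's images.
obs_predictions = {}
for obs_id, score_list in obs_scores.items():
    n = len(score_list)
    merged = defaultdict(float)
    for scores in score_list:
        for class_id, p in scores.items():
            merged[class_id] += p / n
    obs_predictions[obs_id] = dict(merged)

print(obs_predictions[2792262])  # → {42: 0.9, 47: 0.1}
```

Other fusion rules (max, product, a learned combiner) would fit the same grouping skeleton.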

Posted by AlexisJolyInria over 2 years ago

The decimal separator to be used is the dot “.”

Posted by CMP over 2 years ago

Dear organizers, I’d like to ask the following questions:

1) How many run files can we upload, and how many can be selected for the final evaluation?

2) If there should be only one line (prediction) per test observation, then the top-3 accuracy will not be used? (The Overview says: “The two main evaluation metrics will be the top-1 and top-3 accuracy”.) Also: what is the purpose of “Rank”, if we only return 1 prediction per observation?

Thank you! Milan


Posted by AlexisJolyInria over 2 years ago

Sorry for the confusion, you can actually have several predictions for each test-observation, one prediction per line. My answer to d_a_konovalov was just to explain that all the images with the same ObservationId have to be processed jointly and should be considered as a single test-observation.
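Putting the clarifications together, a run file holds one `ObservationId;ClassId;Probability;Rank` line per prediction, with several predictions allowed per test-observation. A minimal sketch of emitting such lines follows; the prediction values are made up for illustration:

```python
# Sketch: emit run-file lines in the stated ObservationId;ClassId;
# Probability;Rank format, several predictions per observation.
# The prediction values below are illustrative only.
predictions = {
    2792262: [(42, 0.9), (47, 0.1)],  # ObservationId -> [(ClassId, Probability), ...]
}

run_lines = []
for obs_id, class_probs in predictions.items():
    # Rank follows descending probability; the dot is the decimal separator.
    ranked = sorted(class_probs, key=lambda cp: cp[1], reverse=True)
    for rank, (class_id, prob) in enumerate(ranked, start=1):
        run_lines.append(f"{obs_id};{class_id};{prob};{rank}")

print("\n".join(run_lines))
# 2792262;42;0.9;1
# 2792262;47;0.1;2
```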

Posted by SabanciU-GTU about 2 years ago

Hi Milan, we are submitting 20 predictions even though the info seems to say they will look at top-1 accuracy, because in previous years it was different. So maybe the main ranking will be top-1, but other scores will also be measured. Berrin
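For reference, top-k accuracy as I understand the metric counts an observation as correct when the true class appears among its k highest-ranked predictions. A small sketch (the ranked lists and ground-truth labels are made up):

```python
# Sketch of top-k accuracy: an observation counts as correct if the
# true class is among its k highest-ranked predicted classes.
# The ranked predictions and labels below are made-up examples.
def top_k_accuracy(ranked_preds, truths, k):
    """ranked_preds: {obs_id: [class_id, ...] best-first}; truths: {obs_id: class_id}."""
    hits = sum(1 for obs_id, true_cls in truths.items()
               if true_cls in ranked_preds.get(obs_id, [])[:k])
    return hits / len(truths)

ranked = {1: [42, 47, 3], 2: [5, 42, 9]}
truth = {1: 42, 2: 9}
print(top_k_accuracy(ranked, truth, 1))  # → 0.5
print(top_k_accuracy(ranked, truth, 3))  # → 1.0
```

Submitting more ranked predictions than the top-k cutoff cannot hurt the top-k scores, which is one reason to submit a longer list as Berrin suggests.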

Posted by SabanciU-GTU about 2 years ago

There are only 2072 unique observation IDs, right?