
Mapping Challenge

Building Missing Maps with Machine Learning


Completed · 719 submissions · 1059 participants · 37628 views

Are the evaluation metrics appropriate?

Posted by lx709 about 1 year ago

I’m wondering whether the given metrics AP_0.5 and AR_0.5 are appropriate for the building extraction task. Building mapping requires not only precisely locating each building’s bounding box but also delineating a precise outline for each building object. On this point, I’d recommend the IoU score or the BF (boundary F1) score for evaluation; most current papers on building extraction use these kinds of metrics.

Posted by spMohanty about 1 year ago

We do compute the IoU before we can compute AP_0.5. The first goal is to incentivize models that can roughly detect buildings. Note that this is already a rather difficult task given the heterogeneous nature of building types at geospatially different places. A detection is counted as correct when there is at least a 50% overlap (IoU ≥ 0.5) between the ground truth and the prediction. Note also that AP_0.5 and mAP_0.5 are used in many similar benchmarks.

The idea here is to address the problem one difficulty level at a time. If, by the end of this challenge, participants demonstrate that it is indeed very easy to roughly detect almost all the buildings, then in future iterations of the challenge we will move to mAP, where we compute the AP at thresholds 0.5, 0.55, 0.6, …, 0.95. That raises the level of difficulty a little more and incentivizes models to also optimize the overall area of overlap. And when participants demonstrate that this too is an easy problem, we will happily move to raw IoU for evaluation.
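For readers unfamiliar with the mechanics, here is a minimal sketch (Python/NumPy, not the official crowdAI evaluation code) of the matching step described above: a predicted building mask counts as a true positive when its IoU with an unmatched ground-truth mask is at least the threshold, and the stricter mAP variant averages over thresholds 0.5 to 0.95. The function names and the greedy matching order are illustrative assumptions, and the full AP computation additionally ranks detections by confidence and integrates the precision-recall curve, which is omitted here for brevity.

```python
# Illustrative sketch only -- not the official challenge scorer.
import numpy as np

def mask_iou(pred, gt):
    """IoU between two binary masks (H x W boolean arrays)."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

def match_detections(preds, gts, iou_threshold=0.5):
    """Greedily match predicted masks to ground-truth masks.

    A prediction is a true positive when its best IoU with a
    still-unmatched ground-truth mask is >= iou_threshold.
    Returns (true_positives, false_positives, false_negatives).
    """
    matched_gt = set()
    tp = 0
    for pred in preds:
        best_iou, best_j = 0.0, None
        for j, gt in enumerate(gts):
            if j in matched_gt:
                continue
            iou = mask_iou(pred, gt)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= iou_threshold:
            matched_gt.add(best_j)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - len(matched_gt)
    return tp, fp, fn

def precision_recall(preds, gts, iou_threshold=0.5):
    """Precision/recall at a single IoU threshold (the AP_0.5 regime)."""
    tp, fp, fn = match_detections(preds, gts, iou_threshold)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def mean_over_thresholds(preds, gts):
    """The stricter mAP-style regime: average over IoU 0.5, 0.55, ..., 0.95."""
    thresholds = np.arange(0.5, 1.0, 0.05)
    scores = [precision_recall(preds, gts, t) for t in thresholds]
    mean_p = float(np.mean([p for p, _ in scores]))
    mean_r = float(np.mean([r for _, r in scores]))
    return mean_p, mean_r
```

The point of the threshold sweep is visible in the code: a sloppy mask with IoU 0.6 is a full credit under the 0.5 threshold, but under the averaged regime it only passes 3 of the 10 thresholds, so models are pushed to maximize the overlap area itself.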

Thanks for your comments; I hope my response clarifies your question.