Building Missing Maps with Machine Learning
By Humanity & Inclusion
over 1 year ago
First of all, big thanks to the organizers for such an interesting and impactful competition.
We at minerva.ml would like to contribute to this community by documenting our progress all the way until the end of the mapping challenge.
We will post everything about our approach here.
Furthermore, all the code is freely available in this GitHub repo: https://github.com/minerva-ml/open-solution-mapping-challenge .
On the current master you can see a very basic pipeline that was built to prepare the ground for the more interesting work.
For those of you interested in what we are working on at the moment, just go to the Projects tab of that repo and see which issues we are solving. That's right: we treat feature requests and bugs as tasks and solve them as they come.
What this means for you is that if you encounter a problem with our code, or feel there is a feature we should implement, just post an issue and we will look into it.
We encourage you to join the development too. Just take an issue, solve it, and send a PR.
Almost forgot. Good luck everyone!
over 1 year ago
In this update (followed by a code update on the solution-2 branch):
We are exploring an approach that scored quite well in Data Science Bowl 2018, where:
- the ground-truth masks are eroded before training
- the model is trained on the eroded masks
- the predicted masks are dilated back
The authors claimed that this made it possible to get cleaner boundaries between close objects.
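The erode-train-dilate trick can be sketched with `scipy.ndimage` as below. This is a minimal illustration, not our pipeline code; the function names `erode_mask` and `dilate_prediction` are ours for this example, and the iteration counts are arbitrary:

```python
import numpy as np
from scipy import ndimage

def erode_mask(mask, iterations=2):
    # Shrink a binary mask so touching instances pull apart in the targets.
    return ndimage.binary_erosion(mask, iterations=iterations).astype(np.uint8)

def dilate_prediction(pred, iterations=2):
    # Grow a predicted mask back toward the original building footprint.
    return ndimage.binary_dilation(pred, iterations=iterations).astype(np.uint8)

# Two nearby buildings, one background row apart.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[1:4, 1:8] = 1   # building A
mask[5:8, 1:8] = 1   # building B

eroded = erode_mask(mask, iterations=1)             # training target
restored = dilate_prediction(eroded, iterations=1)  # post-processed prediction
```

Training on `eroded` widens the gap between the two buildings, and dilating the network output afterwards roughly restores the footprints.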
Also, since the metric is precision for objects with IoU higher than 0.5, we will check how much one can pump up the score by doing the following:
- erode the masks heavily; the ideal scenario would be masks covering the central 60-70% of the building area
- train the model on those heavily eroded masks
This approach should lead to high precision on building centers (of course, recall should drop). Remember that predicting the central 51% of a building's area would already count as a perfect 1.0 match.
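The reasoning behind the 51% figure: if the prediction lies entirely inside the ground-truth building, the intersection equals the prediction's area, so IoU reduces to predicted area divided by building area. A quick check with a toy mask (sizes chosen only for illustration):

```python
import numpy as np

def iou(a, b):
    # Intersection over union of two boolean masks.
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Ground-truth building: a 10x10 square (100 px).
gt = np.zeros((12, 12), dtype=bool)
gt[1:11, 1:11] = True

# Prediction covering only the central 8x8 region (64 px), fully inside gt.
pred = np.zeros((12, 12), dtype=bool)
pred[2:10, 2:10] = True

# intersection = 64, union = 100 -> IoU = 0.64, above the 0.5 threshold
```

So any centered prediction covering just over half the building area clears the 0.5 IoU bar.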
We have also added support for multi-GPU training and for computing the evaluation in chunks.
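Chunked evaluation here just means scoring the predictions a fixed-size batch at a time instead of materializing everything at once. A rough sketch of the idea, under our own assumptions (the names `evaluate_in_chunks` and `iou_per_sample` are hypothetical, not the repo's API):

```python
import numpy as np

def iou_per_sample(y_true, y_pred):
    # Hypothetical per-sample metric: IoU of each mask pair in the batch.
    inter = np.logical_and(y_true, y_pred).sum(axis=(1, 2))
    union = np.logical_or(y_true, y_pred).sum(axis=(1, 2))
    return list(inter / union)

def evaluate_in_chunks(y_true, y_pred, metric_fn, chunk_size=100):
    # Score fixed-size chunks and aggregate, so peak memory stays bounded
    # by the chunk size rather than the full evaluation set.
    scores = []
    for start in range(0, len(y_true), chunk_size):
        stop = start + chunk_size
        scores.extend(metric_fn(y_true[start:stop], y_pred[start:stop]))
    return sum(scores) / len(scores)
```

Since per-sample scores are averaged at the end, the chunked result matches a single-pass evaluation.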
A few experiments are training as we speak, so we should have better scores by the end of this week.