Ants as tools to understand society
By Penn State University
over 2 years ago
It seems that we have both the train.csv file which contains coordinates of ants, and the raw image file. Is the task inferring the trajectory of ants using only train.csv?
I have the same thought. Is there a feasible approach using only train.csv?
@eulerdata @LiberiFatali : While it would indeed be an interesting approach to disregard the images entirely, the goal of the challenge is to figure out the coordinates of the individual ants from the corresponding frame images. The presence of the coordinates for the ants in the training set serves a twofold purpose:
* helps to train your model, if your approach requires training data to estimate the positions of the visible ants
* helps to guess the positions of ants that are not visible in certain frames, by smoothing the data across multiple frames (or to estimate the best smoothing approach for that task).
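The smoothing idea above can be sketched with pandas: interpolate the missing frames per ant, then apply a small centred rolling mean. The column names (`ant_id`, `frame_id`, `x`, `y`) are assumptions based on the submission discussion later in this thread; check them against the actual train.csv.

```python
import pandas as pd

# Tiny synthetic stand-in for train.csv; None marks frames where
# an ant was not visible.
df = pd.DataFrame({
    "ant_id":   [1, 1, 1, 1, 2, 2, 2, 2],
    "frame_id": [0, 1, 2, 3, 0, 1, 2, 3],
    "x": [10.0, None, 14.0, 16.0, 5.0, 6.0, None, 8.0],
    "y": [20.0, None, 24.0, 26.0, 9.0, 10.0, None, 12.0],
})

df = df.sort_values(["ant_id", "frame_id"])
# Per ant: fill invisible frames by interpolation, then smooth
# with a 3-frame centred rolling mean.
for col in ["x", "y"]:
    df[col] = df.groupby("ant_id")[col].transform(
        lambda s: s.interpolate(limit_direction="both")
                   .rolling(3, center=True, min_periods=1).mean()
    )
```

The window size (3 frames here) is an arbitrary choice; tuning it against the training labels is part of the task.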
@spMohanty: thanks for clarification.
This is the text from the overview:
“ To ensure that the students’ tracking data is as exact and repeatable as possible, we chose to track the neck area of each ant (see image 2) using customized Python code. We were able to obtain x and y coordinates (unit is pixels) for the location of all individuals at each second of the observation period. “
Is that Python code downloadable, or is there a simple example? Then we would know how to extract the neck coordinates after detecting an ant.
@LiberiFatali : Thanks for pointing it out. And sorry for the confusion created. We have updated the text to more accurately represent the data collection procedure.
The labels in the data were not collected using an automated algorithm, but by manual labelling by human annotators. The “customized Python code” mentioned in the previous description of the process was to facilitate the process of manual data collection by splitting a video into its frames, and recording the x,y coordinates of the labels marked by the human annotators.
In the new description, we mention the reason why we chose the coordinate of the neck of the ant instead of the barcode: it minimizes the variance between multiple human annotators, as the neck is easily recognisable and small enough that all volunteers mark approximately the same spot.
Most approaches to this problem will, as a first step, attempt to localize the barcode instead. And if all participants localize the barcode effectively, the relative scores can still indicate which submissions work better than others.
As a next step, participants might try to find the orientation of the ant, and then try to guess the location of the “neck” of the ant based on custom heuristics to minimise the score even further.
But in any case, just being able to localize the barcodes would be a great first step to start with.
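The neck-from-barcode heuristic above can be sketched as follows, assuming your detector already gives a barcode centre and an orientation angle towards the head. The `offset` distance is a made-up constant here; you would calibrate it against the training labels.

```python
import math

def estimate_neck(cx, cy, theta, offset=12.0):
    """Estimate the neck coordinate from a detected barcode.

    cx, cy : barcode centre in pixels (from your own detector)
    theta  : ant orientation in radians, pointing towards the head
    offset : barcode-centre-to-neck distance in pixels -- a
             hypothetical constant; fit it on the training data
    """
    return cx + offset * math.cos(theta), cy + offset * math.sin(theta)

# Example: an ant facing along the positive x-axis.
nx, ny = estimate_neck(100.0, 200.0, 0.0)
```

With `theta = 0.0` and the default offset, this shifts the barcode centre 12 pixels along x, leaving y unchanged.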
Hope this answers your questions.
@spMohanty: thanks for reply.
I have tried to submit and got this error on my entry: “Submission does not seem to have the information for all the ant_id and frame_id pairs.”
So I checked and saw that there are 511201 rows (including the header) in the Testing Dataset (0e07c3ff-55f0-409b-bc8a-c231da94adbc_test.csv).
My submitted .csv also includes 511201 rows (including the header).
Then I loaded the ant_id and frame_id pairs of “0e07c3ff-55f0-409b-bc8a-c231da94adbc_test.csv” into setA, and the ant_id and frame_id pairs of my submitted .csv into setB. The two sets are equal (setA == setB returns True).
Is there special format for submitted .csv that I should follow?
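For anyone hitting the same error, a sketch of such a pair check with pandas; tiny synthetic frames stand in for the real CSVs, and the column names are the ones discussed in this thread:

```python
import pandas as pd

def missing_pairs(test_df, sub_df):
    """Return the (ant_id, frame_id) pairs present in the test set
    but absent from the submission."""
    keys = ["ant_id", "frame_id"]
    merged = test_df[keys].merge(sub_df[keys], on=keys,
                                 how="left", indicator=True)
    return merged.loc[merged["_merge"] == "left_only", keys]

# Synthetic stand-ins for the real test.csv and submission.csv.
test_df = pd.DataFrame({"ant_id": [1, 1, 2], "frame_id": [0, 1, 0]})
sub_df = pd.DataFrame({"ant_id": [2, 1, 1], "frame_id": [0, 1, 0],
                       "x": [5.0, 6.0, 7.0], "y": [1.0, 2.0, 3.0]})
leftovers = missing_pairs(test_df, sub_df)  # empty: same pairs, any order
```

An empty result means every required pair is covered, regardless of row order.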
We have updated the grader to map submissions to the answers irrespective of the order of the ant_id / frame_id pairs, and your submission has been re-graded.
@LiberiFatali : Well, I personally made some submissions, and just adapting test.csv after loading it as a pandas DataFrame seems to work for me!! How exactly are you creating your submission.csv?
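A sketch of that approach: reuse the test file itself as the submission template, so the ant_id / frame_id pairs (and their order) match the grader's exactly. A synthetic DataFrame stands in for the loaded test.csv, and `predict_xy` is a dummy placeholder for a real model.

```python
import pandas as pd

# Stand-in for pd.read_csv("<challenge>_test.csv"); columns assumed.
test = pd.DataFrame({"ant_id": [1, 1, 2], "frame_id": [0, 1, 0]})

def predict_xy(row):
    # Placeholder: a real model would predict from the frame image.
    return 0.0, 0.0

sub = test.copy()
preds = sub.apply(predict_xy, axis=1, result_type="expand")
sub["x"], sub["y"] = preds[0], preds[1]
# index=False keeps spurious index columns out of the submission.
sub.to_csv("submission.csv", index=False)
```

Starting from the test file means the pair-coverage check above can never fail for a missing row, only for missing predictions.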
@spMohanty: My submission was already re-graded. It’s ok now, I guess :)