
Train Schedule Optimisation Challenge

Optimizing train schedules


Completed | 209 Submissions | 444 Participants | 29671 Views

Challenge Feedback

Posted by jordiju about 1 year ago

Hi All

From the admin team, thanks so much for participating in this challenge! We are thrilled with the results, so for us, it has already been a huge success.

At the same time, we know not everything was perfect, and it was a hard problem to even get started with. To help us improve and make the next SBB challenges (and there will be more) more accessible, fairer, and better, we welcome your feedback on this challenge.

Feel free to share with us anything you feel could have been done better. We will do our best to incorporate these learnings in future challenges.

Of course, positive feedback is also welcome.

On behalf of everybody involved in organizing this challenge, THANKS AGAIN!


Posted by jordiju  about 1 year ago |  Quote

…if it was not clear, we meant that you should share your feedback directly in this thread. That is easier and more durable than sending us emails.

Posted by Palstek  about 1 year ago |  Quote

Holy moly, I forgot about the line-breaking in Markdown. Second try:

I did like the challenge. It was very close to reality and interesting. Also, you guys always replied pretty fast when there were problems. However, there are a couple of things:

  • The most important point to improve: at least at the top, this challenge turned into an Instance-5-overfitting challenge (I mean, I don’t know what the other competitors did; my solution is still pretty general IMO, but of course everyone optimized towards that one problem). For one, it would have been much more interesting to have more than one instance without a perfect solution, and more importantly:

  • Training-Set==Validation-Set is really boring. It would have been much cooler if you had given us, say, 9 instances, but then had us upload the algorithm itself so it could be tested on similar yet different instances. This would of course require some restrictions on the programming language, but I would assume most people here worked with the same 2-3 languages.

  • On a side note: the data format. As I understand it, this is largely dictated by the system used within SBB. Still, having the graphs as adjacency lists would make them so much easier to work with. Moreover, the data is inconsistent: e.g. the “connections” tag is sometimes not there at all, sometimes there but empty, sometimes there and “null”, sometimes a list containing null, and then sometimes it actually contains data. It’s just a bit of a pain in the ass to catch all these options, but of course not technically difficult (see the sketch after this list).

  • On a side-note 2: Validator. Adding up some weighted penalties is not that hard, is it? I hope whoever programmed that one is not responsible for the SBB train-collision-avoidance systems ;-)
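A minimal sketch, in Python, of the kind of defensive normalization described in the data-format bullet above. It assumes the scenario is plain JSON; the file name and the nesting of routes, route_paths and route_sections in the usage part are assumptions for illustration, not taken from the challenge documentation.

```python
import json

def normalized_connections(route_section: dict) -> list:
    """Return the 'connections' entries of a route section as a plain list.

    Covers the variants mentioned above: key missing, present but empty,
    explicit null, a list containing null, or an actual list of entries.
    """
    value = route_section.get("connections")    # key may be absent entirely
    if value is None:                           # missing key or explicit null
        return []
    if not isinstance(value, list):             # unexpected scalar: treat as empty
        return []
    return [entry for entry in value if entry is not None]  # drop embedded nulls

# Illustrative usage; the nesting below is an assumption and may not match
# the actual scenario files exactly.
with open("sample_scenario.json") as f:         # hypothetical file name
    scenario = json.load(f)
for route in scenario.get("routes", []):
    for path in route.get("route_paths", []):
        for section in path.get("route_sections", []):
            for connection in normalized_connections(section):
                print(connection)
```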

Cheers,


Posted by jordiju  about 1 year ago |  Quote

Training-Set==Validation-Set is really boring

Thanks @Palstek! Yes, in hindsight this is probably our biggest regret as well. The initial rationale was to leave as much freedom as possible regarding technologies, languages, etc. But in the end the tradeoff is probably not worth the gain because, as you say, most people would still choose common tools. Also, there were timeline constraints that “encouraged” us to go for the easier setup we chose.

Next time we would surely let the algorithms run against an unknown test set.

Posted by jordiju  about 1 year ago |  Quote

On a side-note 2: Validator. Adding up some weighted penalties is not that hard, is it? I hope whoever programmed that one is not responsible for the SBB train-collision-avoidance systems ;-)

No, they’re not, but just for fun, let me quote @EmmetBrown and @christophBuehlmann … I hope you feel guilty :D

Posted by hur  about 1 year ago |  Quote

Commenting on behalf of @secret_squirrel group: It was a really interesting challenge. Thank you to everyone involved in preparing it, as well as to the participants. I must say that we were impressed by the results, and we are looking forward to seeing what other people did.

I think the structure of the input file was confusing (e.g. having both service intentions and routes although they represent the same thing), but I understand that this is the structure used internally at SBB and changing it just for the challenge would be troublesome. Moreover, the description of the input data file was overcomplicated. It took us, 5 people, a couple of hours to comprehend what was what. The validator could also be improved. I don’t remember the exact case, but at some point it was throwing German error messages.

Last but not least (and yes, yet again), of course I won’t miss the chance to complain about that 32-core machine of yours :) If there is no difference in computation time between a 32-core and an 8-core machine, then you should have set the time limit based on 8 cores, since that is more conventional. In the end it wouldn’t have changed anything for us, but it would definitely have given us peace of mind.


Posted by Palstek  about 1 year ago |  Quote

Another thing I would like to mention is the prize money. I think most people did not take part for financial reasons, but knowing the hourly rates of programmers in Switzerland, the prize money is not exactly what I would consider generous, especially considering that you expect us to publish our work open source and that it is very close to your actual use case.

As a matter of fact, I know of several talented programmers who did not take part, or stopped their efforts early on, because they felt they were being exploited.

Possible improvements would be either to offer more generous prize money (and not only to the top 3) or, preferably, to leave the rights to the code with the creators (and possibly negotiate deals after the competition).


Posted by jordiju  about 1 year ago |  Quote

Commenting on behalf of @secret_squirrel group:

Thanks, good points

Posted by ms123  about 1 year ago |  Quote

I agree with all the comments here, especially those of @Palstek regarding test instances = evaluation instances and the prize money.

Regarding the prize money: aside from the Top 3 prizes, you could maybe also offer some (naturally smaller) amount of money to whoever is first on the leaderboard at the end of each week (or some criterion in this spirit). This would mean that i) people who made a good, but maybe not Top 3, effort could at least get some money if they had a good idea that could be implemented quickly (and not just overfitted to the data); at least in my experience, simple approaches may be more useful in practice than over-engineered ones (which may be necessary to win a challenge, especially when test instances = validation instances); and ii) it might also encourage people to submit their results earlier, which would help other participants estimate whether it is still worth continuing in the competition, and also show what the potentially best results for the instances are.


Posted by Must  about 1 year ago |  Quote

Hi everyone,

I also share most of the comments above. Having some hidden instances to evaluate the genericity and adaptability of the algorithms would probably reduce the problem of instance overfitting. This would imply a different type of competition, with a reduced ability to self-evaluate. I really liked this challenge because it was easy to evaluate the algorithm’s performance at any time. I also liked the progression in instance difficulty: good results on the easy instances could be obtained quite fast, which was a motivation to go further.

In terms of improvements, the evaluator API could have provided the complete score of a solution, along with the penalty details, instead of only some warning/error messages (a possible response format is sketched below). I also find that the scenario data format could have been simpler to handle, with a graph description of the routes as suggested by @Palstek and a more integrated view of service intentions and their associated routes. Finally, given the final results obtained by the competitors, there could have been more than one “really difficult” instance (instance 5), but I understand it can be difficult to guess the difficulty of all the instances beforehand.
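A minimal sketch of what such a richer evaluator response could look like; all field names and numbers here are hypothetical illustrations of the suggestion, not the actual API.

```python
# Purely hypothetical response shape: the total objective value plus a
# breakdown of the individual weighted penalties that produced it.
example_response = {
    "objective_value": 12.5,   # total weighted score of the submitted solution
    "delay_penalties": [
        {"service_intention": "111", "section_requirement": "A", "penalty": 4.0},
        {"service_intention": "113", "section_requirement": "C", "penalty": 6.5},
    ],
    "routing_penalties": [
        {"service_intention": "111", "route_section": "5#3", "penalty": 2.0},
    ],
    "warnings": [],            # the existing warning/error messages could stay
}

# With such a breakdown, a participant could see exactly where the score
# comes from and verify that the parts add up to the total.
total = sum(p["penalty"] for p in example_response["delay_penalties"]) + \
        sum(p["penalty"] for p in example_response["routing_penalties"])
assert abs(total - example_response["objective_value"]) < 1e-9
```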

Anyway, this was a great experience, thank you to the organizers and congratulations to the winners !


Posted by LeoB  about 1 year ago |  Quote

First, let me thank the organizers for making this competition possible. I learned a lot!

Most of the points I see as critical were made before, so I just want to emphasize a few. First, the good points: I found it great how quickly we got good feedback from the organizers. Also, the different problem instances made it easy to work on the problem. The descriptions on GitHub were very helpful.

On the other hand, the input format (as mentioned by others before) is really strange, and to this day I’m not a hundred percent sure whether I understood everything correctly. Simplifying the “domain model” would make it easier to get started in this competition.

Then I’m not sure whether the competition format is suitable for this kind of problem at all. Since one can easily calculate the score locally, there is in general no need to submit a solution early, which makes the leaderboard quite boring. Also, there is no way to ensure that solutions are not overfitted or do not use more resources than specified (at least not until the end of the competition).
