Hackathon JCDecaux

Organized by hackathon-by-jcdecaux

First phase: Training
May 19, 2017, 6 p.m. UTC

Competition Ends
May 20, 2017, 3:09 p.m. UTC

Welcome!

Please join our Slack group: click here to join

You can also find the general presentation of this hackathon on our website Hackathon-by-JCDecaux.

Overview

The hackathon takes place in two phases.

The first phase: TRAINING

  • Start time: 19/05/2017 19:00
  • End time: 20/05/2017 15:00
  • During the training phase, you will design and train your algorithm with a list of clients' briefs and the other necessary data we provide.
  • You can upload your answer at any time during this phase to get feedback on your algorithm. (See the Evaluation page for more information about how to submit results.)

 

The second phase: COMPETITION

  • Start time: 20/05/2017 15:00
  • End time: 20/05/2017 16:30
  • A new list of clients' briefs will be provided in this phase.
  • Once the second phase starts, you will have 90 minutes to generate your answer and upload it to CodaLab.
  • After 16:30, no more submissions will be accepted.
  • You can upload more than one answer and choose the best one for the final evaluation.

Evaluation Criteria

This page explains how competition submissions are evaluated and scored.

Submissions

To submit your results, follow these steps:

1 Generate the "answer.json" file

  • Your answer must be a single file named answer.json
  • An example of "answer.json" is given below:

{
  "(week-1, order-1)": [
    "01043.00003.02.01.01",
    "01049.00001.01.01.01",
    "01053.00021.02.01.01"
  ],
  "(week-1, order-3)": []
}

  • The key "(week-i, order-j)" means that the brief arrives in week i and with order j. Keys and briefs are in one-to-one correspondence: every brief has one unique key, and every key refers to a unique brief.
  • The key must have exactly the form shown above (note the space after the comma!). If a brief's key is not recognized by our evaluation program, that brief is treated as rejected.
  • Each key maps to a list containing the references of your selected panels.
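Because the key format is strict, it can be safer to generate the keys programmatically rather than type them by hand. A minimal Python sketch (the `selections` data below is hypothetical):

```python
import json

# Hypothetical selection results: maps (week, order) tuples to chosen panel references.
selections = {
    (1, 1): ["01043.00003.02.01.01", "01049.00001.01.01.01", "01053.00021.02.01.01"],
    (1, 3): [],
}

# Keys must match the exact format "(week-i, order-j)" -- note the space after the comma.
answer = {f"(week-{w}, order-{o})": panels for (w, o), panels in selections.items()}

with open("answer.json", "w") as f:
    json.dump(answer, f)
```

Rejected briefs can simply map to an empty list, as in the "(week-1, order-3)" entry above.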


2 Write a "readme.txt" file explaining the method used to generate the answer. It should contain at least:

  • Your algorithm: your chosen strategy and a brief description of your implementation: data structures, coding language, etc.
  • Your algorithm's average running time during the training phase (after how many minutes it gives an answer).
  • Literature references (any that helped).
  • There are no strict requirements for "readme.txt", but the group should be ready to answer any question about its strategy, implementation details, and code.


3 Prepare and save your code folder; it will be requested at a later time.

Files to submit

 

  • The only file to submit is a zip archive containing "answer.json" and "readme.txt" (plus "XXX_YYY_codes.zip" for the competition phase).
  • Once you have generated all required files, compress them into a zip. Note that this zip must contain only files, no folders. The file name "answer.json" is checked by the grading program; no other name is accepted.
  • The name of the zip archive to upload must be "[user name]" + "_" + "[group name]" + ".zip"
  • During the competition phase, we will ask you to submit your code as well, as a zip named "[user name]" + "_" + "[group name]" + "_" + "codes" + ".zip"
  • The submission should look like this:

           XXX_YYY.zip
                 |
                 |------- answer.json
                 |------- readme.txt
                 |------- XXX_YYY_codes.zip (only for the competition phase)
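The packaging step can be scripted so the archive never accidentally contains a folder. A Python sketch, where the user and group names and the placeholder file contents are for illustration only:

```python
import json
import zipfile

user, group = "XXX", "YYY"  # placeholders: your CodaLab user name and group name

# Placeholder files for illustration; in practice these already exist.
with open("answer.json", "w") as f:
    json.dump({"(week-1, order-1)": []}, f)
with open("readme.txt", "w") as f:
    f.write("Strategy description...\n")

# The archive must contain the files at its root -- no folders inside.
with zipfile.ZipFile(f"{user}_{group}.zip", "w") as zf:
    zf.write("answer.json")
    zf.write("readme.txt")
    # For the competition phase, also add the f"{user}_{group}_codes.zip" file.
```

Writing the files with their bare names (no directory components) keeps them at the root of the archive, as required.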

 

How to submit on Codalab

To submit the zip file on CodaLab, here is a brief guide:

 

  • Find this competition -> Participate -> Submit/View Results ->
  • Click the "Submit" button -> choose the zip file you want to upload and click "ok" -> wait until the file is uploaded and the program finishes grading; this may take 1 to 2 minutes. ->
  • Click the refresh button to see whether the grading program has finished.
  • If the "STATUS" becomes "Finished", your submission was successfully evaluated. You can then click "Download evaluation output from scoring step" and review your score. You can also click "View scoring output log" to view the results printed by the grading program.
  • If the "STATUS" becomes "Failed", our grading program did not succeed in grading your "answer.json"; check the scoring output log for any useful information and ask the mentors for help.
  • Don't forget to click "Submit to Leaderboard" once the "STATUS" becomes "Finished" so that we can see and evaluate your submission. Otherwise, your results won't receive a final score.

 

How to understand the numbers in "scores.txt"

The numbers in the scores.txt are not your final score!

 

  • If your STATUS is Finished, you can download the evaluation output, which contains two files: "metadata" and "scores.txt".
  • The exit code in "metadata" should be 0 if all went well.
  • "scores.txt" contains five numbers, for example:

satisfied: 73.0
imprediff: 4.877
revenue: 17.726
interscore: 8.361
intrascore: 12.43

 

  • "satisfied" is the number of briefs your selection has satisfied (that is, the impressions offered are greater than or equal to the impressions demanded for the given target (cible)): 73 briefs are satisfied in our case.
  • "imprediff" is the average impression difference (offered minus demanded, as a % of demanded impressions) over all satisfied briefs; it can be regarded as a measure of wasted resources: the larger it is, the more is wasted.
  • "revenue" is the total revenue (in million euros) summed over the weeks (each week's revenue being the sum of revenue over its briefs): the larger, the better. In our case, it is 17.726 million euros.
  • "interscore" is a penalty for uneven geographic dispersion of panels between the cities of France: since it is a penalty, the smaller, the better.
  • "intrascore" is similar to "interscore", except that it penalizes uneven dispersion within each city.
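For quick comparison between submissions, the scores file can be parsed into a dictionary. A sketch assuming the simple "name: value" format shown above:

```python
# Sketch: parse the "name: value" lines of scores.txt into a dict of floats.
scores_text = """\
satisfied: 73.0
imprediff: 4.877
revenue: 17.726
interscore: 8.361
intrascore: 12.43
"""

scores = {}
for line in scores_text.strip().splitlines():
    name, value = line.split(":")
    scores[name.strip()] = float(value)

# Remember: "satisfied" and "revenue" are rewards (larger is better);
# "interscore" and "intrascore" are penalties (smaller is better).
```

In practice you would read `scores_text` from the downloaded "scores.txt" instead of a literal.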

 

The final score is based on these measures, and also on your presentation, your strategy, and the performance of your code.

(and possibly economic sense, who knows)

Terms and Conditions

Please read carefully our terms and conditions before participating.

Training

Start: May 19, 2017, 6 p.m.

Description: Train your best algorithm :)

Competition

Start: May 20, 2017, 1 p.m.

Description: Let's rock!

Competition Ends

May 20, 2017, 3:09 p.m.
