RecoGym Challenge

Organized by Criteo


RecoGym Challenge: 100 Products


Can you build a recommender system that optimises Click-Through Rate?


For more information go here.


RecoGym Competition Evaluation

Your agent is evaluated by its Click-Through Rate (CTR): the higher the value, the better the score.
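As a minimal sketch of how this metric is computed, the snippet below runs a uniformly random baseline policy through the open-source RecoGym simulator (github.com/criteo-research/reco-gym) and counts clicks per recommendation. The env/step conventions follow the project README; the random policy, the seed and the 100-user budget are illustrative assumptions, not the organisers' exact harness.

```python
# Hedged sketch: CTR of a random baseline in RecoGym.
# Assumes the documented RecoGym Gym-style interface; verify
# names such as env_1_args against the current repository.
import gym
import numpy as np
from recogym import env_1_args

env_1_args['random_seed'] = 42
env = gym.make('reco-gym-v1')
env.init_gym(env_1_args)

rng = np.random.default_rng(42)
num_products = env_1_args['num_products']

clicks, impressions = 0, 0
for user_id in range(100):                         # simulate 100 online users
    env.reset(user_id)
    observation, reward, done, _ = env.step(None)  # first organic event, no rec yet
    while not done:
        action = int(rng.integers(num_products))   # random baseline recommendation
        observation, reward, done, _ = env.step(action)
        impressions += 1
        clicks += reward                           # reward is 1 on click, 0 otherwise

print(f'CTR: {clicks / impressions:.4f}')          # higher is better
```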

Challenge Rules:

#1. There will be a winner for each of the two tasks. Prize money is 2000 euros for the winning team in the challenge involving 10 000 products, and 1000 euros for the winning team in the challenge involving 100 products. If the same algorithm wins both tasks, all of the prize money goes to that team.

#2. We will evaluate the agents by their resulting Click-Through Rate (CTR) over a range of RecoGym configurations that are unknown to the participants. What you should know is that we are interested in generalisation from small samples, so we will be testing in regimes with relatively small numbers of users (fewer than 1000). We expect the winning entry to make sophisticated choices with respect to a) creating a representation of the user context, b) combining organic and bandit signals, and c) handling the bandit signal; a sketch of this structure is given after these rules.

#3. Any attempt to upload malicious code will result in disqualification.

#4. Criteo employees, interns, PhD students and their families are not eligible to participate.

#5. We are providing a Docker image that reflects our testing environment and contains PyTorch, TensorFlow and scikit-learn. We will make our best efforts to run all code, but cannot guarantee code that was not tested with this Docker image. Where possible, we will provide feedback on code that did not run, provided it was submitted sufficiently early.

#6. The maximum run time for training on 1000 users is 5 hours on an AWS t2.2xlarge machine.

#7. A leaderboard of early entries will be maintained with periodic updates.

#8. If the judges deem it necessary, the winner will be chosen based on the average performance over several A/B tests. On this basis, the leader on the leaderboard will not necessarily be declared the winner.
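The skeleton below is one hedged illustration of the structure rule #2 calls for, not a reference solution. It assumes the `Agent` base class and the observation/action formats of the open-source RecoGym package (session key 'v' = viewed product, action key 'a' = recommended product, 'ps' = propensity); the `CountAgent` name, the count-based context representation and the popularity-times-CTR scoring rule are our own simplifications.

```python
# Hedged sketch of a RecoGym agent: (a) counts organic views as the
# user-context representation, (b, c) combines them with a smoothed
# click-rate estimate from the bandit signal.
import numpy as np
from recogym.agents import Agent


class CountAgent(Agent):
    def __init__(self, config):
        super().__init__(config)
        self.organic_views = np.zeros(config.num_products)  # organic signal
        self.impressions = np.ones(config.num_products)     # bandit signal, smoothed
        self.clicks = np.zeros(config.num_products)

    def train(self, observation, action, reward, done=False):
        # Organic signal: products the user browsed on the publisher site.
        if observation is not None and observation.sessions():
            for session in observation.sessions():
                self.organic_views[session['v']] += 1
        # Bandit signal: whether a past recommendation was clicked.
        if action is not None and reward is not None:
            self.impressions[action['a']] += 1
            self.clicks[action['a']] += reward

    def act(self, observation, reward, done):
        # Fold the latest organic events into the context before acting.
        self.train(observation, None, None, done)
        # Combine signals: organic popularity weighted by estimated CTR.
        scores = (1 + self.organic_views) * (self.clicks / self.impressions)
        action = int(np.argmax(scores))
        ps_all = np.zeros(len(scores))
        ps_all[action] = 1.0
        return {
            **super().act(observation, reward, done),
            **{'a': action, 'ps': 1.0, 'ps-a': ps_all},
        }
```

Whether such simple counts generalise from fewer than 1000 users, as rule #2 warns, is exactly what the evaluation probes.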

Development

Start: Oct. 1, 2019, midnight

Description: Development phase: create models and submit them, or directly submit results on the validation and/or test data; feedback is provided on the validation set only.

Final

Start: Nov. 30, 2019, midnight

Description: Final phase: submissions from the previous phase are automatically cloned and used to compute the final score. The results on the test set will be revealed when the organizers make them available.

Competition Ends

Dec. 1, 2019, midnight

Leaderboard

#  Username    Score
1  qttruong    1.540
2  numericlee  1.500
3  ihtiihti    1.492