See4C challenge

Organized by frenuphetu

First phase

April 1, 2017, midnight UTC


Competition Ends
Nov. 8, 2018, 11:59 p.m. UTC

Video forecasting

This is a spatio-temporal time series forecasting competition.

Have you ever been frustrated by low-quality images in teleconferences, or by a speaker's image suddenly freezing? We are asking you to help replace frames that have been lost due to poor transmission quality, to improve the teleconferencing experience.

In this mini competition we are testing the competition protocol of the planned See.4C challenge on spatio-temporal time series forecasting (which will involve a different task). The data for the present competition consists of small gray-level video clips with frames of only 32x32 pixels, with no sound track, sampled at 25 frames per second. From the past few frames, you must predict the next 8 frames. This is a competition with code submission. You will receive immediate feedback on the leaderboard during the feedback phase. The final validation phase, which will determine the winners, will be a blind test: only one submission will be possible, and the results will not be revealed until the competition is over and the results have been reviewed by the competition EXPERT PANEL.

There are no prizes for this competition. It is part of the See4C workshop, to be held April 22, 2017, in Toulon, France, co-located with ICLR.

This challenge is brought to you by the See4C consortium. Contact the organizers.


During the feedback phase, the participants must submit code to predict the next 8 frames of the video, given past frames. We compute the RMSE (root mean square error) at the pixel level for each predicted frame, averaged over all predicted frames (in each phase).
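For reference, the metric described above can be sketched as follows. This is a minimal illustration with NumPy; the array shapes follow the 32x32 frame format described on this page, but the organizers' exact scoring code may differ in details.

```python
import numpy as np

def frame_rmse(predicted, truth):
    """Pixel-level RMSE for each frame, averaged over all predicted frames.

    predicted, truth: arrays of shape (n_frames, 32, 32), e.g. the
    8 predicted frames and the corresponding ground-truth frames.
    """
    # RMSE of each frame over its 32x32 pixels, then the mean over frames
    per_frame = np.sqrt(np.mean((predicted - truth) ** 2, axis=(1, 2)))
    return per_frame.mean()

# Sanity check: a perfect prediction scores 0.
clip = np.random.rand(8, 32, 32)
print(frame_rmse(clip, clip))  # 0.0
```

Lower is better: a score of 0 means every predicted pixel matched the ground truth exactly.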


To make entries, go to the "Participate" tab. You must be a Codalab user to participate and you must accept the Terms and Conditions of the challenge and the rules. The rules include all instructions found in this website.

Under the "Participate" tab, you will be able to download sample "public" data to familiarize yourself with the task, a starting kit, and a sample submission. The interface must be respected, for both code and results. Code execution time on Codalab is limited to ten minutes for the whole dataset (600 steps, each step corresponding to predicting the next 8 frames) in the "feedback" phase. The number of samples and the execution time limit will be doubled for the final "validation" phase.

To create a submission, just zip the script "" together with your code (in the starting kit example you will include "", "" and the Python files in "sample_code/"). IMPORTANT: zip the code without directories.
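The "zip without directories" requirement can also be met programmatically. A sketch using Python's zipfile module; the file names passed in are placeholders, since the actual script names come from the starting kit:

```python
import os
import zipfile

def make_submission(zip_name, files):
    """Zip the given files flat: no directory prefixes inside the archive."""
    with zipfile.ZipFile(zip_name, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in files:
            # arcname strips the directory part, as the platform requires
            zf.write(path, arcname=os.path.basename(path))

# Placeholder file list -- substitute the actual files from the starting kit:
# make_submission("my_submission.zip", ["sample_code/predictor.py"])
```

Opening the resulting archive should show the files at the top level, with no enclosing folder.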

There are 2 phases:

  • Phase 1: Feedback phase. Your code received on our server will see in the input directory several subdirectories:
    The data in the train/Xm2/ and train/Xm1/ directories can be used for training. The data in the adapt/ directory will be used for adaptation and testing. In all directories, samples are named Xn.h5, where n is the file index, running from 1 to N in train/Xm2/ and train/Xm1/ and from 0 to N in adapt/. N is different in each directory.
    Not all data will be available at once to your algorithm in the adapt/ directory. A first clip, X0.h5, will appear; your code (via the script) will be called to make predictions (saved in file Y0.h5) for the next 8 frames, which will be the first 8 frames of X1.h5. The script will then inform the server that you are ready to accept X1.h5. When this new clip appears in the adapt/ directory, your code will be called again to predict Y1.h5, and so on.
    The expected sequence of query and answer videos (called "step") will be:
    input name : number of frames    output name : number of frames
    X0.h5 : 101 frames               Y0.h5   :   8 frames
    X1.h5 : 8 frames                 Y1.h5   :   8 frames
    X2.h5 : 8 frames                 Y2.h5   :   8 frames
    X3.h5 : 109 frames               Y3.h5   :   8 frames
    X4.h5 : 8 frames                 Y4.h5   :   8 frames
    X5.h5 : 8 frames                 Y5.h5   :   8 frames
    X6.h5 : 109 frames               Y6.h5   :   8 frames     

    The outputs Yn.h5 must be formatted in the same way as the inputs Xn.h5. Yn.h5 is the prediction of the first 8 frames of Xn+1.h5.
    Obviously, when the ground truth Xn+1.h5 of the previous step's prediction Yn.h5 appears in the adapt/ directory, you can use it for training, so as to get better at making predictions for the current step.
    The performance of the LAST code submission you make will be displayed on the leaderboard.
  • Phase 2: Final "validation" phase. The setting will be similar. Your last submission of phase 1 will be automatically forwarded, so you do not need to do anything. However, if you wish, you may also make ONE submission during the validation phase. But beware, this is a blind test. You will not get any feedback. If your submitted code fails, you will not get another chance. Your code will see the following subdirectories:
    Similarly to the previous phase, data in the train/ sub-directories is for training and data in the adapt/ directory for adaptation and testing. Your performance on the test set will appear on the leaderboard when the organizers finish checking the submissions.
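The adapt/ loop described above can be sketched as a single prediction step. This is a minimal last-frame-repetition baseline; the HDF5 dataset key ("X") and the exact call protocol are assumptions — check the starting kit for the real interface:

```python
import numpy as np
import h5py

def predict_step(input_path, output_path, key="X"):
    """Read clip Xn.h5, predict the next 8 frames, write Yn.h5.

    Baseline strategy: repeat the last observed frame 8 times.
    The dataset key "X" is an assumption; the starting kit defines
    the actual HDF5 layout.
    """
    with h5py.File(input_path, "r") as f:
        frames = f[key][:]                      # shape (n_frames, 32, 32)
    prediction = np.repeat(frames[-1:], 8, axis=0)  # (8, 32, 32)
    with h5py.File(output_path, "w") as f:
        f.create_dataset(key, data=prediction)
```

Such a "persistence" baseline is a common reference point in forecasting: any learned model should at least beat repeating the last frame.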



  • General Terms: This CONTEST is governed by See.4C TERMS AND CONDITIONS, and the specific RULES set forth. Capitalized terms are defined in the TERMS AND CONDITIONS.
  • Eligibility: By making entries into the CONTEST, THE PARTICIPANTS confirm that they are qualified to make entries according to the TERMS AND CONDITIONS, including that they are at least 14 years of age. They confirm that they identified themselves by registering with a valid name and email and that they agree with the TERMS AND CONDITIONS and the RULES. THE ORGANIZERS, SPONSORS, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or THE CONTEST design giving him (or her) an unfair advantage, are excluded from participation. A disqualified person may submit one or several ENTRIES in THE CONTEST and request to have them evaluated, provided that they notify THE ORGANIZERS of their conflict of interest. If a disqualified person submits an ENTRY, this ENTRY will not be part of the final ranking and does not qualify for PRIZES. THE PARTICIPANTS should be aware that THE ORGANIZERS reserve the right to evaluate for scientific purposes any ENTRY made in THE CONTEST, whether or not it qualifies for PRIZES.
  • Anonymity and privacy: The participants/teams can elect to remain anonymous to the outside world during the feedback phase by using a pseudonym as Codalab ID. However, if they win and elect to accept their PRIZE, their identity will be revealed in accordance with the TERMS AND CONDITIONS. Our privacy policy is outlined in the TERMS AND CONDITIONS.
  • Announcements: The RULES may change from time to time at the discretion of THE ORGANIZERS. To receive announcements and be informed of any change in RULES, THE PARTICIPANTS must provide a valid email.
  • Schedule: The CONTEST follows the schedule indicated in the "Phases" tab.
  • Prizes and awards: There is no tangible PRIZE or AWARD for this CONTEST, but the winners will receive a certificate signed by THE ORGANIZERS. Accepting this certificate will constitute a PRIZE acceptance with respect to the TERMS AND CONDITIONS. Therefore, to be eligible to receive the certificate, the winners will have to publicly release their code under a common Open Source license and fill out fact sheets.
  • Dissemination: THE PARTICIPANTS are encouraged to attend the workshop. The workshop is independent of the CONTEST: participating in the CONTEST is not a condition to attend the workshop.
  • Registration as individuals or teams: THE PARTICIPANTS must register to Codalab as individuals and provide a valid email address before they can enter as a TEAM. Any registered participant can start a TEAM by registering it on the "Team" tab. Subsequently, he/she can invite other participants to join his TEAM. Reciprocally, other TEAMS can invite him/her to join them. However, no participant can be a member of several TEAMS. No participant can leave a TEAM without prior consent of THE ORGANIZERS. This consent will be given only in exceptional cases.
  • Submission method: The results must be submitted through this CodaLab competition site. THE PARTICIPANTS can make up to 5 submissions per day in the feedback phase (up to a maximum of 100) and only ONE submission in the validation phase. Using multiple accounts to increase the number of submissions is NOT permitted. The entries must be formatted as specified on the Evaluation page. Further instructions and a starting kit are provided on the Data page for USERs having accepted the RULES.



Are there prerequisites to enter the challenge?

No, except accepting the TERMS AND CONDITIONS.

Can I enter any time?

You can enter during the feedback phase only. Registration closes at the end of the feedback phase.

Where can I download the data?

From the "Data" page, under the "Participate" tab. You first need to register and accept the TERMS AND CONDITIONS and the RULES to access data.

How do I make submissions?

Register and go to the "Participate" tab where you find data, and a submission form.

Do you provide tips on how to get started?

We provide a Starting Kit in Python with step-by-step instructions in a Jupyter notebook. We also provide some tutorial material and fact sheets on benchmark methods.

Are there prizes?

No. Just kudos! However, there will be travel awards for the best papers submitted to the workshop, which will be held April 22, 2017, in Toulon, France, in conjunction with ICLR.

Do I need to submit code to participate?

Yes, participation is by code submission.

When I submit code, do I surrender all rights to that code to the SPONSORS or ORGANIZERS?

No. You just grant to the ORGANIZERS a license to use your code for evaluation purposes during the challenge. You retain all other rights. However, the winners will be required to make their code publicly available under an OSI-approved license such as, for instance, Apache 2.0, MIT or BSD-like license, if they accept their PRIZE (i.e. if they accept their award certificate since there are no tangible prizes in the CONTEST). See our TERMS AND CONDITIONS.

If I win, I must submit a fact sheet, do you have a template?

Yes, please download it [HERE].

How much computational power and memory are available?

You are sharing resources with other users on 2 servers with the following specifications:

Component   Number   Type                          Total cores
CPU         1        E5-2699v3                     36 physical / 72 virtual
RAM         -        256 GB DDR4                   -
GPU         2        Nvidia GeForce GTX Titan X    6144 CUDA cores

GPUs are now available. If you experience unreasonable delays in getting results back from your submissions, please contact us. The PARTICIPANTS will be informed if the computational resources increase. They will NOT decrease.

Can I pre-train a model on my local machine and submit it?

This is not explicitly forbidden, but it is discouraged. We prefer that all calculations be performed on the server. If you submit a pre-trained model, you will have to disclose it in the fact sheets. All "past" data will be available to your program on the server. During the feedback phase, you will have available for training the "public downloadable data" (in data/Xm2) and the "training feedback phase data" (in data/Xm1). During the final validation phase, you will have available for training the same data as in the feedback phase, plus the data on which you were tested during the feedback phase and additional training data, all in four subdirectories of data/: Xm4, Xm3, Xm2, and Xm1. See the "Data" page for details.
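Since sample files are named Xn.h5 with a numeric index, enumerating the training directories above in order needs a numeric rather than lexicographic sort (lexicographically, "X10.h5" sorts before "X2.h5"). A small sketch, assuming only the naming convention stated on this page:

```python
import os
import re

def list_samples(directory):
    """Return the Xn.h5 file names in a directory, sorted by the index n."""
    pattern = re.compile(r"^X(\d+)\.h5$")
    indexed = []
    for name in os.listdir(directory):
        m = pattern.match(name)
        if m:
            # sort key is the integer index, not the string
            indexed.append((int(m.group(1)), name))
    return [name for _, name in sorted(indexed)]
```

For example, a directory containing X1.h5, X2.h5 and X10.h5 is returned in that order, not with X10.h5 in the middle.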

Do I have to re-submit my code in the final validation round?

No. Submissions of the feedback phase will be forwarded automatically to the last round. However, you will have 3 days in the validation phase to make one final submission if you wish, which will overwrite the last submission you made in the feedback phase. During that time, no feedback will be provided on the leaderboard. The results on validation data will only be revealed once the jury has deliberated, and at the latest on the date of the workshop (April 22, 2017).

What is my time budget?

Your execution must run in less than 10 minutes (600 seconds) in the feedback phase and 20 minutes (1200 seconds) in the validation phase. There are twice as many videos to process in the validation phase.

Does the time budget correspond to wall time or CPU time?

CPU time.

My submission seems stuck, how long will it run?

In principle, no more than its time budget: we kill the process if the time budget is exceeded. Submissions are queued and run on a first-come, first-served basis. We are using two identical servers. Contact us if your submission has been stuck for more than 24 hours. You can check the execution time on the leaderboard.

How many submissions can I make?

Five per day during the feedback phase (and up to a total of 100). Only ONE during the final validation phase. Please respect other users. It is forbidden to register under multiple user IDs to gain an advantage and make more submissions. Violators will be DISQUALIFIED FROM THE CONTEST.

Do my failed submissions count towards my number of submissions per day?

Yes. Please contact us if you think the failure is due to the platform rather than to your code and we will try to resolve the problem promptly.

If I under-use my time budget in one phase, can I use it later?

No, each phase has its own fixed time budget.


What happens if I exceed my time budget?

Your submission does not get scored. The process gets killed.

The time budget is too small, can you increase it?

We may eventually increase it, if the burden on our servers is not too high and we see that this is required to beat the baseline results. But do not count on it.

Why are you using RMSE as a metric?

For simplicity: everyone understands RMSE. We are aware that this may not be the best metric for the task, and other metrics will be computed. However, the PARTICIPANTS will be ranked by RMSE to determine the winners.

Which version of Python are you using?

The code was tested under Anaconda Python 2.7. We are running Python 2.7 on the server, with the same libraries available. We also provide a version of Python 3, Octave, Julia, and many other libraries, all bundled in a Docker image.

Can I use something else than Python code?

Yes. Any Linux executable can run on the system, provided that it fulfills our interface (see how to call it from the script "" in the starting kit). However, we have only prepared a starting kit in Python at this stage and have not tested this option. We also provide an example of a submission in Octave.

How do I test my code in the same environment that you are using before submitting?

When you submit code to the See.4C platform using Codalab, your code is executed inside a docker container. This environment can be exactly reproduced on your local machine by downloading the corresponding docker image. The See.4C docker environment contains a large number of pre-loaded programs, including Python 2 and 3 (with libraries such as keras, tensorflow, theano, numpy, scikit-learn), Julia, R, Octave, etc. See for details.
Non GPU users, if you are new to docker, follow these instructions to install docker. GPU users, follow these more detailed instructions.
We will step you through running the starting kit inside the See.4C docker. You can follow a similar procedure to run other code.
If you installed docker in a virtual machine, make sure to start the virtual machine (this will be the case if you have an older Mac and used Docker toolbox; the virtual machine can be launched from the launch pad with “docker quick start terminal” or from the command line with “docker-machine ssh default”). Download and unzip the starting kit from the "Participate" tab. Then copy it to the docker machine.
docker-machine scp see4c_starting_kit default:/home/docker
Then run the docker:
docker run -it -p 8888:8888 -v /home/docker:/data see4c/notebook:alpha
Go to a web browser and check that the notebook is running at http://[the_IP_address]:8888/, where the_IP_address is localhost or the IP address of your virtual machine, obtained with 'docker-machine ip default'.
Then open README.ipynb, which is found in the directory data/starting_kit, in your web browser. WARNING: the default notebook kernel is Python 3; you'll have to switch to Python 2.
After running all the cells of README.ipynb, you will get a submission file called 'sample_submission*****.zip' in the directory data/; you can click on it to download it, then submit it to the website.

Are there publication opportunities?

Yes. The competition is part of the workshop we organize, and there will be proceedings.

What is meant by "Leaderboard modifying disallowed"?

Your last submission is automatically shown on the leaderboard; you cannot choose which submission is selected. If you want a submission other than your last one to "count" and be displayed on the leaderboard, you need to re-submit it.

What is the file called ""?

This is a file that you should have in your submitted bundle to indicate to the platform which program must be executed and how.

Can I register multiple times?

No. If you accidentally register multiple times or have multiple accounts from members of the same team, please notify the ORGANIZERS. Teams or solo PARTICIPANTS with multiple accounts will be disqualified.

How can I create a team?

You must already have registered and joined the competition (this is achieved by going to the "Participate" tab and accepting the rules). A new "Team" tab should appear. Click on the "Team" tab, fill in the team information, check “Allow requests”, and submit. Before others can join, the organizer of the competition will need to approve your team. The user who creates a team becomes its owner/leader (with management privileges): he or she can accept or reject requests to join the team and revoke members. Warning: you cannot join someone else's team if you have created your own team.

How can I destroy a team?

You cannot. If you need to destroy your team, contact us.

How can I join a team?

You join a team by requesting enrollment in a team that has already been formed. You must already have registered and joined the competition (this is achieved by going to the "Participate" tab and accepting the rules). A new "Team" tab should appear. Click on the "Team" tab, select the team you want to join, click on “Request enrolment”, and submit. The leader of the team must approve your request before you are included in the team. Warning: you cannot join someone else's team if you have already created your own team, and you cannot join multiple teams.

How can I leave a team?

You cannot. If you need to leave a team, contact us.

Can I give an arbitrarily hard time to the ORGANIZERS?

The EXPERT PANEL chair person is:

Hugo Jair Escalante,
Computer Science Department
National Institute of Astrophysics, Optics and Electronics
Luis Enrique Erro num 1, Tonantzintla, 72840, Puebla, Mexico

Where can I get additional help?

For questions of general interest, THE PARTICIPANTS should post their questions to the forum.



Start: April 1, 2017, midnight

Description: DEVELOPMENT: Create a predictor and submit the code to the platform.


Start: April 23, 2017, midnight

Description: FINAL: Your LAST submission of the development phase is evaluated on NEW data.

Competition Ends

Nov. 8, 2018, 11:59 p.m.
