AIM 2019 Video Temporal Super-Resolution Challenge Forum


> Use of the validation data - evaluation only and not for training

Dear participants,

We provide the validation GT so that participants can get direct feedback on their solutions by measuring the PSNR and SSIM of the developed methods.
Because CodaLab submissions can handle only a subsampled part of the data, submissions during the development phase do not reflect the full performance.
Participants can now get feedback locally by evaluating against the full validation GT.
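
For local feedback, a minimal sketch of per-frame PSNR averaged over a sequence is below. This is only an illustrative implementation in NumPy, not the organizers' official evaluation script; the actual challenge scoring code and file layout may differ.

```python
import numpy as np

def psnr(gt, pred, max_val=255.0):
    """Peak signal-to-noise ratio between a GT frame and a predicted frame.

    Both inputs are arrays of the same shape; max_val is the peak pixel
    value (255 for 8-bit images).
    """
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    mse = np.mean((gt - pred) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)

def average_psnr(gt_frames, pred_frames):
    """Mean PSNR over corresponding frame pairs of a sequence."""
    scores = [psnr(g, p) for g, p in zip(gt_frames, pred_frames)]
    return sum(scores) / len(scores)
```

SSIM is less trivial to implement by hand; in practice one would compute it with an established library such as `skimage.metrics.structural_similarity`, keeping in mind that window size and data-range settings must match whatever the official evaluation uses.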

However, please note that we want to avoid solutions overfitting to any kind of data, including the validation set.
Please do not use the validation data as training data.
The awarding committee will favor solutions that do not use the validation data for training purposes.

All the best,
Seungjun, Sanghyun, Radu

Posted by: SeungjunNah @ Sept. 3, 2019, 12:07 p.m.

As you mentioned, the validation and test submissions on CodaLab do not directly show the performance of a solution because they only take 30 fps results.
So I am wondering whether the full results (30 fps and 60 fps) submitted by email affect the ranking.
If they do, how? Are they used for objective or subjective evaluation?
Thank you.

Posted by: BumjunPark @ Sept. 3, 2019, 12:35 p.m.

Dear BumjunPark,

The final ranking will be decided by the full submissions sent by email (60 fps).
PSNR and SSIM are the primary objective evaluation metrics.

We initially wanted the 60 fps results to be evaluated on CodaLab but could not do so due to the large size of the data.
That is why we collect subsampled submissions on CodaLab: to give at least partial feedback during the development phase.
We are providing the full validation GT and will collect the full final test results by email to evaluate on the complete set.

Posted by: SeungjunNah @ Sept. 3, 2019, 1:04 p.m.

Dear organizers,

Will the final ranking be affected by any subjective metric?

Posted by: Siyao @ Sept. 4, 2019, 3:57 a.m.

Dear Siyao,

The ranking will be determined by PSNR and SSIM.
The challenge winning award will go to the team with the best quantitative objective results.

To be included in the ranking, reproducibility is a must, and source code or an executable should be submitted together with the results.
Factsheets should describe each method's originality and how it works.

If other solutions show substantially impressive or interesting characteristics, there could be additional awards. The criteria for additional awards will be determined by the awarding committee.

Posted by: SeungjunNah @ Sept. 4, 2019, 8:31 a.m.