AIM 2019 Constrained Super-Resolution Challenge - Track 1: Parameters optimization Forum


> What is the criterion for the final ranking?

I noticed that many models on the leaderboard have more parameters and longer runtimes than the baseline. Their PSNR is, of course, also higher than the baseline's, but does that meet the competition requirements? And if a model has fewer parameters and a shorter runtime than the baseline, with a PSNR that is above the baseline's but below those models', how is the final ranking determined?

Posted by: WangChaofeng @ Aug. 22, 2019, 1:55 a.m.

I refer to the challenge description:
"
Track 1: Parameters, the aim is to obtain a network design / solution with the lowest amount of parameters while being constrained to maintain or improve the PSNR result and the inference time (runtime) of MSRResNet (Ledig et al, 2017 & Wang et al, 2018).

Track 2: Inference, the aim is to obtain a network design / solution with the lowest inference time (runtime) on a common GPU (ie. Titan Xp) while being constrained to maintain or improve over MSRResNet (Ledig et al, 2017 & Wang et al, 2018) in terms of number of parameters and the PSNR result.

Track 3: Fidelity, the aim is to obtain a network design / solution with the best fidelity (PSNR) while being constrained to maintain or improve over MSRResNet (Ledig et al, 2017 & Wang et al, 2018) in terms of number of parameters and inference time on a common GPU (ie. Titan Xp).
"
Since the validation data is the same for all tracks, I suspect participants are submitting whatever they have to any of the tracks' validation servers for feedback.

So, participants need to improve in one of the three categories (inference time, PSNR fidelity, number of parameters) over the MSRResNet reference model while not getting worse than MSRResNet in the other two categories.
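To make the rule concrete, here is a minimal sketch (not the official scoring code) of how a per-track validity check and ranking could work under the quoted description. The metric names, the baseline figures used in the example, and the function names are all hypothetical placeholders, not values from the challenge.

```python
# Hypothetical sketch of the per-track ranking rule quoted above:
# a submission must maintain or improve every metric except the
# track's own objective, and valid submissions are then ranked
# on that objective alone.

# "params" and "runtime": lower is better; "psnr": higher is better.
LOWER_IS_BETTER = {"params", "runtime"}

# Track 1 optimizes parameters, Track 2 runtime, Track 3 PSNR.
TRACK_OBJECTIVE = {1: "params", 2: "runtime", 3: "psnr"}


def is_valid(submission, baseline, track):
    """True if every non-objective metric matches or beats the baseline."""
    objective = TRACK_OBJECTIVE[track]
    for metric, base in baseline.items():
        if metric == objective:
            continue  # the objective itself is unconstrained
        value = submission[metric]
        ok = value <= base if metric in LOWER_IS_BETTER else value >= base
        if not ok:
            return False
    return True


def rank(submissions, baseline, track):
    """Filter to valid submissions, then sort by the track's objective."""
    objective = TRACK_OBJECTIVE[track]
    valid = [s for s in submissions if is_valid(s, baseline, track)]
    reverse = objective not in LOWER_IS_BETTER  # higher PSNR ranks first
    return sorted(valid, key=lambda s: s[objective], reverse=reverse)
```

Under this reading, a Track 1 model with more parameters than the baseline can still be a valid entry (parameters are its objective, so it simply ranks low), but a Track 1 model slower than the baseline is disqualified outright.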

Posted by: Radu @ Aug. 22, 2019, 2:49 p.m.