As I understand the contest, we aim to super-resolve the low-resolution HSI by a factor of 3.
However, we notice lr2 data is only provided in testing.zip.
Typically, applying a super-resolution algorithm in the x2 case yields better performance than in the x3 case.
To achieve a higher score, competitors will naturally tend to use the lr2 images to generate their final results, which seems to violate the contest rules.
Nevertheless, given a submitted HSI, there is no way to tell whether it was generated from the corresponding lr2 or lr3 image.
So I was wondering: is it legal to use the lr2 images to produce the final result (i.e. super-resolving by a factor of 2)?
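The x2-vs-x3 gap can be illustrated with a toy experiment. This is only a hedged sketch using a synthetic smooth image and naive block-average downsampling plus nearest-neighbour upsampling, not the challenge data or any actual super-resolution method:

```python
import numpy as np

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((a - b) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def down_up(img, s):
    """Block-average downsample by factor s, then nearest-neighbour upsample."""
    h, w = img.shape
    lr = img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))
    return np.repeat(np.repeat(lr, s, axis=0), s, axis=1)

# Synthetic smooth 96x96 "band" (96 is divisible by both 2 and 3).
x = np.linspace(0, np.pi, 96)
img = np.outer(np.sin(x), np.sin(x))

psnr_x2 = psnr(img, down_up(img, 2))
psnr_x3 = psnr(img, down_up(img, 3))
print(psnr_x2 > psnr_x3)  # the x2 reconstruction is closer to the ground truth
```

Even with this trivial "restoration", the x2 pipeline reconstructs the image more faithfully than the x3 one, which is why starting from lr2 would give an unfair advantage.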
Sorry for the typo,
"However, we notice lr2 data is only provided in testing.zip. " in my last post should be
"However, we notice lr2 images are also provided in testing.zip. "
Another major concern is that the performance metric scores for each testing image are still accessible to the competitors.
This means we can submit multiple results from different algorithms, then select the best per-image combination to overfit the testing set.
This selective-combination trick is impractical in the real world and should also be forbidden in this contest.
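The selective-combination trick described above can be simulated with synthetic scores. The numbers below are entirely made up for illustration; the point is that cherry-picking the best submission per image can never score lower than the best single submission:

```python
import random

random.seed(0)
n_images, n_submissions = 10, 5

# Hypothetical per-image PSNR scores for several independent submissions.
scores = [[30 + random.gauss(0, 2) for _ in range(n_images)]
          for _ in range(n_submissions)]

# Mean score of the best single submission.
best_single = max(sum(s) / n_images for s in scores)

# "Selective combination": for each test image, keep the result from
# whichever submission scored highest on that image.
combined = sum(max(scores[k][i] for k in range(n_submissions))
               for i in range(n_images)) / n_images

print(combined >= best_single)  # the cherry-picked ensemble never scores lower
```

Since the per-image maximum dominates every individual submission, visible per-image scores directly enable this kind of test-set overfitting.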
The lr2 and lr3 data are included in testing_lr for consistency with validation_lr and training_lr.
Also, I have deactivated the leaderboard results after making sure the scoring program is working. However, the point of the ranking is to qualify for paper submission, and, just like a normal manuscript submission, the assumption is that authors follow the honour system and are honest about their results; usually, the heavy consequences of "cheating" in manuscript preparation are deterrent enough.
Posted by: mehrdad.shoeiby @ Aug. 16, 2018, 4:55 a.m.
Thank you for your clarification.
I will submit the x3 spatially super-resolved results in my final submission.
I was also wondering how we should submit our code and factsheet (is that the same as submitting the testing results?).
Each participating team in the final test phase needs to submit its test results through Codalab (the latest submission is considered for evaluation) and, in addition, should send by email the filled-in factsheet describing its entry in the challenge.
Posted by: mehrdad.shoeiby @ Aug. 16, 2018, 5:38 a.m.