We have been receiving several questions about the use of the validation data and of models pre-trained on it.
Technically, we cannot prevent anyone from using the validation GT for any purpose, as the data has been publicly available since NTIRE 2019.
However, using the validation data for training defeats the basic purpose of having a validation set.
Our main goal in hosting competitions and releasing the dataset publicly is to benefit the community.
Validation data exists to let people
1) validate the effectiveness of their methods and modifications on data outside the training set, and
2) compare their methods without touching the test set.
If the validation data is used for training, we have little or no data left to check the generalizability of the developed methods.
If someone instead constructs their own validation set, comparison with other methods becomes complicated:
because the training/validation environments then differ, it is difficult to perform a scientific analysis across methods.
To compare such methods fairly, additional effort is required: one has to retrain all the methods in a unified environment.
Without such third-party effort, the community cannot draw solid conclusions.
We understand that these concerns would not have arisen if we had not released the validation GT.
However, the CodaLab online server has limited capacity and evaluates submissions on only 1/10 of the validation set.
We therefore decided to release the validation GT so that people can analyze their solutions on their own, not to encourage its use for training.
We maintain our position: for the good of the community, we do not recommend using the validation data for training.
If a pre-trained model (trained on the validation data) is to be used, it should instead be retrained from scratch on the training data and then used for further development.