Dear organizers,
I found that many photos in the training dataset are not pixel-wise aligned. There are actually several types of misalignment: camera shift and moving objects (e.g. trees, grass). However, the only evaluation metrics are weighted PSNR and SSIM. The problem description also states: "Images for training, validation and testing are captured in the same way with the same set of cameras." This implies that the misalignment is also present in the test data. I wonder how you plan to compute a pixel-wise metric (PSNR) on misaligned data. If it is computed naively, the challenge becomes practically useless, because "blurry" predictions will score a higher PSNR. Will the test data be aligned, or am I missing something?
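To make the concern concrete, here is a minimal toy sketch (my own example, not challenge code): against a reference that is shifted by one pixel relative to a sharp prediction, a blurred prediction can score a higher PSNR than the sharp one, because blurring spreads intensity and "hedges" against the shift.

```python
import numpy as np

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(64, 64)).astype(np.float64)  # "ground truth" texture

# Candidate 1: the same sharp image, misaligned by a 1-pixel horizontal shift.
shifted_sharp = np.roll(gt, 1, axis=1)
# Candidate 2: an aligned but blurred image (simple horizontal box blur).
blurred = (gt + np.roll(gt, 1, axis=1) + np.roll(gt, -1, axis=1)) / 3.0

print(psnr(gt, shifted_sharp))  # low: pixel-wise comparison punishes the shift
print(psnr(gt, blurred))        # higher: blur scores better despite losing detail
```

So under naive pixel-wise PSNR, a model that outputs blurrier images would be rewarded whenever the LR/HR pairs are misaligned.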
Thanks in advance, and looking forward to your reply.
Posted by: ostyakov @ June 4, 2020, 9:40 a.m.

Thanks for your question!
Actually, we have carefully prepared this large dataset, including:
1: strictly control the image-capture process: fix the tripod, trigger the shutter via Bluetooth, and shoot scenes without moving objects as much as possible
2: carefully and iteratively refine the alignment results many times
3: remove images that are not well-aligned and patches that are smooth regions or blurry, and select patches with rich textures for the training set
4: carefully and manually check the alignment of each image and avoid misalignment like the cases you pointed out, as much as possible
5: conduct extensive experiments with existing SR methods to evaluate this dataset
Overall, we have tried our best to ensure accurate alignment and to make the dataset cover diverse textures and real-world scenes. Still, not every image pair may be exactly pixel-wise aligned, because aligning images exactly is an extremely difficult task. Besides, we have trained many state-of-the-art SR methods and explored new solutions on this dataset for different scale factors. The extensive experimental results of those trained models demonstrate promising performance for real SR and appealing generalization to realistic applications. That is, our alignment effort is reliable and the effect of any remaining misalignment is negligible. In particular, we have also carefully re-checked the validation and test images for these tracks: they are well aligned, and no blurry images are included.
Posted by: pengxu @ June 4, 2020, 12:34 p.m.

Thank you for the detailed reply.
However, looking at the dataset, I found very large shifts in some crops, for example 000012, 000016, 000018, and 000021.
There is also sometimes a color mismatch between LR and HR: for example 000022.
So, can we expect that the test dataset will not have such misalignments, and that the colors of the LR and HR images will match?
Posted by: ostyakov @ June 4, 2020, 3:59 p.m.

As stated above, we have carefully re-checked the test images.
They are well aligned and do not have such misalignments.
As for the color, we also aligned the images globally in color, but slight local inconsistencies may still occur in some cases.
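For readers wondering what "global color alignment" between an LR/HR pair can look like in practice, here is a minimal sketch of one common approach (my own illustration, not necessarily the organizers' method): matching each channel's global mean and standard deviation to the reference. By construction it cannot fix the local color inconsistencies mentioned above.

```python
import numpy as np

def match_colors(src, ref):
    """Globally align src's colors to ref by matching per-channel mean and std."""
    src = src.astype(np.float64)
    ref = ref.astype(np.float64)
    out = np.empty_like(src)
    for c in range(src.shape[-1]):
        s_mu, s_sigma = src[..., c].mean(), src[..., c].std()
        r_mu, r_sigma = ref[..., c].mean(), ref[..., c].std()
        scale = r_sigma / s_sigma if s_sigma > 0 else 1.0
        out[..., c] = (src[..., c] - s_mu) * scale + r_mu
    return np.clip(out, 0.0, 255.0)

# toy usage: simulate a global color mismatch, then undo it
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(16, 16, 3)).astype(np.float64)
src = ref * 0.9 + 20.0           # globally shifted/scaled colors, still in [0, 255]
out = match_colors(src, ref)     # recovers ref's global color statistics
```

Since the simulated mismatch here is a global affine change per channel, the matching recovers the reference exactly; real capture differences are only approximately affine, which is why residual local mismatches remain.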