Under-Display Camera (UDC) is a new imaging system that mounts a display screen on top of a traditional digital camera lens. Such a system has two main advantages. First, it follows the product trend of full-screen devices with a larger screen-to-body ratio, which provides a better and more intelligent user experience: with no bezel or extra buttons, users can access more functions by directly touching the screen. Second, it enables better human-computer interaction. Placing the camera at the center of the display improves teleconferencing with natural gaze alignment, which is increasingly relevant for larger display devices such as laptops and TVs.
Unlike pressure or fingerprint sensors, which can be integrated into a display relatively easily, an imaging sensor struggles to maintain its function after being mounted behind a display. The imaging quality of the camera is severely degraded by the lower light transmission rate and by diffraction effects, so the captured images are noisy and blurry. Therefore, while UDC brings a better user experience and interaction, it may sacrifice the quality of photography, face processing, and other downstream vision tasks.
Enhancing the degraded images is best addressed with learning-based image restoration approaches. Recently, deep restoration models have achieved strong performance on image processing tasks such as denoising, deblurring, deraining, dehazing, super-resolution, and low-light enhancement. However, because they are trained on synthetic data with a single degradation type, existing models can hardly be applied to real-world low-quality images with complicated, combined degradations. To address complicated real degradation with learning-based methods, it is necessary to collect real paired data or to synthesize near-realistic data by fully understanding the degradation model.
We hold this image restoration challenge in conjunction with the RLQ'20 Workshop at ECCV 2020. We are seeking efficient, high-performance image restoration algorithms for recovering under-display camera images. The challenge has two tracks; participants are encouraged to submit results to both, but entering only one track is also acceptable.
Track 1 ("T-OLED"): obtain restored RGB images with the highest PSNR from low-quality T-OLED UDC images.
Track 2 ("P-OLED"): obtain restored RGB images with the highest PSNR from low-quality P-OLED UDC images.
The top-ranked participants in each track will receive prizes and will be invited to follow the ECCV workshop submission guide to describe their solutions and submit to the associated RLQ workshop at ECCV 2020. More details can be found in the data section of the competition.
Please feel free to use the 'Forum' to discuss related topics. You can also email Yuqian Zhou (zhouyuqian133 AT gmail.com) with the subject 'UDC Challenge Inquiry'.
We thank the Microsoft Applied Science Group, and we used online resources from the NTIRE 2020 challenge websites as organizational references.
Participants are provided with aligned paired training data: display-free images and the corresponding display-covered images. The goal is to restore the display-free image from a display-covered, degraded image.
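As a rough illustration of how the aligned pairs might be consumed, the sketch below iterates over hypothetical "LQ" (display-covered) and "HQ" (display-free) folders with matching file names; the released archives may use a different layout and naming, so treat the paths as placeholders.

```python
# Minimal sketch of reading aligned training pairs.
# The folder names "LQ"/"HQ" and the PNG extension are assumptions,
# not the official data layout.
from pathlib import Path

import numpy as np
from PIL import Image


def load_pairs(root: str):
    """Yield (display_covered, display_free) image pairs as uint8 RGB arrays."""
    lq_dir = Path(root) / "LQ"   # degraded, display-covered inputs (assumed name)
    hq_dir = Path(root) / "HQ"   # display-free ground truth (assumed name)
    for lq_path in sorted(lq_dir.glob("*.png")):
        hq_path = hq_dir / lq_path.name  # identical file name marks the aligned pair
        lq = np.array(Image.open(lq_path).convert("RGB"))
        hq = np.array(Image.open(hq_path).convert("RGB"))
        yield lq, hq
```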
The evaluation compares the restored images with the ground-truth display-free images. We report the standard Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index, as commonly employed in the literature; implementations are available in most image processing toolboxes. The final results are ranked by PSNR.
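For reference, a minimal sketch of these two metrics using the scikit-image implementations is shown below, assuming 8-bit RGB images; the official scoring script may differ in details such as data range handling or color conversion.

```python
# Minimal sketch of PSNR/SSIM computation for one image pair (assumes uint8 RGB).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(restored: np.ndarray, ground_truth: np.ndarray):
    """Return (PSNR, SSIM) for a restored image against its display-free ground truth."""
    psnr = peak_signal_noise_ratio(ground_truth, restored, data_range=255)
    # channel_axis=-1 treats the last axis as color channels (scikit-image >= 0.19;
    # older versions use multichannel=True instead).
    ssim = structural_similarity(ground_truth, restored, data_range=255, channel_axis=-1)
    return psnr, ssim
```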
All the data used in this challenge may be used for research purposes only. Applying the data to any commercial product is strictly prohibited. The copyright and all rights belong to the Applied Science Group (ASG) of Microsoft.
You are eligible to register and compete in this contest only if you meet all the following requirements:
This contest is void wherever it is prohibited by law.
Entries from industry and research labs are allowed and may compete in both the validation phase and the final test phase. However, to be officially ranked on the final test leaderboard and to be eligible for awards, reproducibility of the results is a must; therefore, participants need to make their code or executables available and submit them. All top entries will be checked for reproducibility and marked accordingly.
Ph.D. student winners of the challenge may be considered for Microsoft Research internship opportunities, subject to an interview.
Yuqian Zhou (UIUC), Tim Large (Microsoft), Sehoon Lim (Microsoft) and Neil Emerton (Microsoft)
Start: April 6, 2020, midnight
Start: July 10, 2020, midnight
End: July 21, 2020, midnight