NTIRE 2020 Real World Super-Resolution Challenge - Track 1: Image Processing artifacts

Organized by Radu Timofte

NTIRE Workshop and Challenges @ CVPR 2020

Post Challenge Information

 

| Challenge | Track                                | Training Source | Training Target | Validation Input | Validation Ground Truth        |
|-----------|--------------------------------------|-----------------|-----------------|------------------|--------------------------------|
| AIM19     | Common Degradations and Compression  | Noisy Images    | Clean Images    | Noisy Images     | Clean Images                   |
| NTIRE20   | Image Processing Artifacts           | Noisy Images    | Clean Images*   | Noisy Images     | Clean Images                   |
| NTIRE20   | Smartphone Images                    | Noisy Images    | (Same as *)     | Noisy Images     | No GT (explanation in Fig. 1)  |

Scoring

> python Measure.py -dirA ./path/to/validation-images -dirB ./path/to/your/result-images
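
If you want a quick local check before submitting, a script along the following lines reproduces the three reported metrics. This is only a minimal sketch, not the official Measure.py: it assumes matching filenames in both directories, 8-bit RGB images, and that the scikit-image, torch, and lpips packages are installed.

    # local_measure.py - sketch of a PSNR/SSIM/LPIPS directory comparison
    # (not the official Measure.py; assumes matching filenames and 8-bit RGB images)
    import argparse, os
    import numpy as np
    import torch
    import lpips
    from PIL import Image
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    parser = argparse.ArgumentParser()
    parser.add_argument('-dirA', required=True, help='ground-truth / validation images')
    parser.add_argument('-dirB', required=True, help='your super-resolved results')
    args = parser.parse_args()

    lpips_fn = lpips.LPIPS(net='alex')  # perceptual metric reported in the leaderboards

    psnrs, ssims, lpipss = [], [], []
    for name in sorted(os.listdir(args.dirA)):
        ref = np.array(Image.open(os.path.join(args.dirA, name)).convert('RGB'))
        out = np.array(Image.open(os.path.join(args.dirB, name)).convert('RGB'))

        psnrs.append(peak_signal_noise_ratio(ref, out, data_range=255))
        ssims.append(structural_similarity(ref, out, channel_axis=2, data_range=255))

        # LPIPS expects NCHW tensors scaled to [-1, 1]
        to_tensor = lambda a: torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0
        with torch.no_grad():
            lpipss.append(lpips_fn(to_tensor(ref), to_tensor(out)).item())

    print(f'PSNR {np.mean(psnrs):.2f}  SSIM {np.mean(ssims):.3f}  LPIPS {np.mean(lpipss):.3f}')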

 

AIM 2019

[Challenge Report]

[Figure: AIM19 data teaser]

 

Validation Set Track 2

| Method             | PSNR  | SSIM  | LPIPS |
|--------------------|-------|-------|-------|
| Bicubic            | 22.36 | 0.614 | 0.673 |
| MadDeamon (Winner) | 21.00 | 0.504 | 0.403 |

 

NTIRE 2020

[Challenge Report]

Track 1: Image Processing Artifacts

[Figure: NTIRE20 Track 1 data teaser]

Validation Set Track 1

| Method                 | PSNR  | SSIM  | LPIPS |
|------------------------|-------|-------|-------|
| Bicubic                | 25.52 | 0.671 | 0.632 |
| Impressionism (Winner) | 24.83 | 0.672 | 0.227 |

 

Track 2: Smartphone Images

[Figure: NTIRE20 Track 2 data teaser]

Only visual results; see the challenge report.

 

 

 

The 5th edition of NTIRE: New Trends in Image Restoration and Enhancement workshop will be held June 15, 2020 in conjunction with CVPR 2020, Seattle, US. Jointly with NTIRE workshop, we organize a variety of NTIRE challenges within image and video enhancement and restoration. Among these is the NTIRE Challenge on Real World Super-Resolution.

Real-World Super-Resolution Challenge

Why do most super-resolution methods fail on images straight from the camera?

In the most common strategy for learning super-resolution models, images are first downscaled to create corresponding low- and high-resolution training pairs. As a consequence, the resulting low-resolution images are clean and almost noise-free, so the model never sees noise at its input during training. This often leads to dramatic artifacts when the method is applied to images that come straight from the camera.
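
The mismatch is easy to see in how standard training pairs are usually built. The following sketch (an illustration only, assuming 8-bit RGB images and a scale factor of 4) produces the clean, bicubically downscaled inputs that conventional SR models are trained on; camera images, by contrast, carry sensor noise and processing artifacts that never appear in such pairs.

    # Conventional SR pair generation: the LR input is a clean bicubic downscale of the HR image.
    # Illustration only (assumes 8-bit RGB images and a x4 scale factor).
    from PIL import Image

    def make_training_pair(hr_path, scale=4):
        hr = Image.open(hr_path).convert('RGB')
        w, h = hr.size
        # Crop so the dimensions are divisible by the scale factor.
        hr = hr.crop((0, 0, w - w % scale, h - h % scale))
        # Bicubic downscaling yields an almost noise-free LR image,
        # unlike real camera output with noise and processing artifacts.
        lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
        return lr, hr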

We organize a challenge to stimulate new research and improve the state of the art in the emerging area of real-world super-resolution. In this setting, there exist no ground-truth reference images that can be directly employed for training. Instead, the model must be learned from only a set of source-domain images, originating, for instance, from a particular camera sensor. The challenge contains two tracks, investigating different types of source domains.

Get inspired by our last year's competition and come up with your own idea to super-resolve images above camera resolution.

Understand last year’s approaches.

Challenge Tracks

Track 1: Image Processing artifacts

Almost all image collections on the web, whether from companies or private individuals, are stored after being enhanced with image processing operations. Unfortunately, this makes most super-resolution methods produce strong artifacts, because those methods only ever see clean images as input, a consequence of generating training data with bicubic downsampling.

The goal of this challenge is to super-resolve images from the Source Domain to the Target Domain, as shown in the figure above. The Source Domain consists of images containing artifacts produced by a simple denoising algorithm. Not only should those images be super-resolved by a factor of 4, but they should also have a clean, high-quality appearance. For this purpose, we provide an additional set of unpaired high-quality reference images that defines the clean target domain.
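
Because the reference images are unpaired, a common way to consume them is to sample source and target images independently, e.g. for adversarial or domain-translation losses. The snippet below is only a sketch of such an unpaired loader with hypothetical folder names (source_lr/ and target_hq/); it is not part of the official challenge code.

    # Sketch of an unpaired loader: source images with artifacts and clean target
    # images are drawn independently (hypothetical folder names, not official code).
    import os, random
    from PIL import Image
    from torch.utils.data import Dataset
    from torchvision import transforms

    class UnpairedRWSRDataset(Dataset):
        def __init__(self, source_dir='source_lr', target_dir='target_hq'):
            self.source = [os.path.join(source_dir, f) for f in sorted(os.listdir(source_dir))]
            self.target = [os.path.join(target_dir, f) for f in sorted(os.listdir(target_dir))]
            self.to_tensor = transforms.ToTensor()

        def __len__(self):
            return len(self.source)

        def __getitem__(self, idx):
            src = Image.open(self.source[idx]).convert('RGB')
            # Target images are unpaired, so draw one at random each time.
            tgt = Image.open(random.choice(self.target)).convert('RGB')
            return self.to_tensor(src), self.to_tensor(tgt)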

In this track, the degraded low-resolution images were generated from high-resolution images, which enables the computation of reference-based quality metrics such as PSNR and SSIM. The employed degradation process should not be modeled or replicated explicitly; however, it may be learned by a generic network, and the method must generalize to other unknown image degradations or natural image characteristics. PSNR and SSIM are reported for reference in the leaderboard, but note that methods with better perceptual quality tend to have worse PSNR/SSIM. The final evaluation will be performed in terms of perceptual quality using the mean opinion score (MOS).

Download the data and get started.

Track 2: Smartphone Images

The spatial and cost constraints of smartphone cameras lead to images of low quality. An often-observed phenomenon is a high noise level in low-light conditions, as the sensors are too small to capture enough light. Such conditions are adversarial to most super-resolution methods, because the input distribution at test time does not match the one seen during training.

The goal of this challenge is to super-resolve smartphone images as shown in the figure above. The Source Domain consists of images containing artifacts originating from the image enhancement operations of the smartphone. Not only should those images be super-resolved by a factor of 4, but they should also have a clean, high-quality appearance. For this purpose, we provide an additional set of unpaired high-quality reference images that defines the clean target domain.

As the images come straight from the camera, a ground truth with 4x the resolution does not exist. Therefore, we do not have a leaderboard for this track, but the model can also be trained for Track 1 and quantitatively evaluated there. The final score is determined by a user study in which participants are asked to rate the overall perceptual image quality.

Download the data and get started.

Difficulties of Real-World Super-Resolution

  • Deep learning is example-based: if the examples shown during training come from a different domain than the test domain, you get artifacts.

  • Methods with better perceptual quality can have a worse PSNR.

  • Benchmarking Real-World Super-Resolution requires a special setup, due to the missing ground truth.

Understand the theory behind.

Important Information

  • 2019.12.20 Release of train data (input and output images) and validation data (only input)

  • 2019.12.20 Validation server online

  • 2020.03.16 Final test data release (only input images)
  • 2020.03.26 Test output results submission deadline (EXTENDED!)
  • 2020.03.26 Fact sheets and code/executable submission deadline
  • 2020.03.28 Preliminary test results release to the participants
  • 2020.04.09 Paper submission deadline for entries from the challenge (EXTENDED!)
  • 2020.06.15 NTIRE workshop and challenges, results and award ceremony (CVPR 2020, Seattle, US)

Note that for the final ranking and challenge winners we will give more weight to teams/participants with entries in more than one track. Ideally, each participant will have entries for both tracks.

NTIRE Workshop Overview

The 5th edition of NTIRE: New Trends in Image Restoration and Enhancement workshop will be held June 15, 2020 in conjunction with CVPR 2020, Seattle, US.

Image manipulation is a key computer vision task, aiming at the restoration of degraded image content, the filling in of missing information, or the transformation and/or manipulation needed to achieve a desired target (with respect to perceptual quality, content, or the performance of applications working on such images). Recent years have witnessed an increased interest from the vision and graphics communities in these fundamental topics of research. Not only has there been a constantly growing flow of related papers, but substantial progress has also been achieved.

Each step forward eases the use of images by people or computers for the fulfillment of further tasks, as image manipulation serves as an important frontend. Not surprisingly, there is an ever-growing range of applications in fields such as surveillance, the automotive industry, electronics, remote sensing, and medical image analysis. The emergence and ubiquitous use of mobile and wearable devices offer another fertile ground for additional applications and faster methods.

This workshop aims to provide an overview of the new trends and advances in those areas. Moreover, it will offer an opportunity for academic and industrial attendees to interact and explore collaborations.

Jointly with NTIRE workshop we have an NTIRE challenge on Real World Super-Resolution, that is, the task of super-resolving (increasing the resolution) an input image from an image domain based on a set of prior examples of source and target domain images. The challenge has two tracks.

Provided Resources

  • Scripts: With the dataset the organizers will provide scripts to facilitate the reproducibility of the images and performance evaluation results after the validation server is online. More information is provided on the data page.

  • Contact: You can use the forum on the data description page (highly recommended!) or directly contact the challenge organizers by email (Martin.Danelljan [at] vision.ee.ethz.ch, Andreas.Lugmayr [at] vision.ee.ethz.ch and Radu.Timofte [at] vision.ee.ethz.ch) if you have doubts or any question.

Data

Overview

We are making available a large Image Processing Artifacts (IPA) dataset, derived from Flickr2K and DIV2K, with a large diversity of content.

The dataset contains images from the source domain (the input images) and images defining a target domain.

The target domain images are important: the super-resolved output images should belong to the target domain.

 

The dataset is divided into:

  • train data

  • validation data, the input source images are provided from the beginning of the challenge and are meant for the participants to get online feedback from the validation server.

  • test data, the participants will receive the input source images when the final evaluation phase starts and the results will be announced after the challenge is over and the winners are decided.

Data access

By accessing the data the participants implicitly agree with the terms and conditions of the challenge.

Track 1: Image Processing Artifacts

Training

  • Source Domain: Image Processing Artifacts
  • Target Domain: Clean High-Quality Images

Validation

  • Source Domain: Image Processing Artifacts

! The final goal is not PSNR, but perceptual quality !

Understand the tradeoff.

Testing

  • Source Domain: Image Processing Artifacts

Track 2: Smartphone Images

Training

  • Source Domain: Smartphone Images
  • Target Domain: Clean High-Quality Images

Validation

  • Source Domain: Smartphone Images

! There is no ground truth for smartphone images upsampled 4x !

To compare your method with other participants on the leaderboard, use the data from Track 1 and submit there during the validation phase.

Testing

  • Source Domain: Smartphone Images

Scoring scripts

The Matlab scoring functions used for the evaluation of the solutions are available at:

https://competitions.codalab.org/my/datasets/download/ebe960d8-0ec8-4846-a1a2-7c4a586a7378

 

 


 

Evaluation

The evaluation consists of comparing the x4 super-resolved images with the ground-truth images, when available. We evaluate perceptual quality for both tracks using the Mean Opinion Score (MOS). However, since MOS requires opinions from human subjects, it will be conducted only on the final test images, for the final ranking. Meanwhile, we report the standard Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity (SSIM) index, as often employed in the literature, whenever a ground-truth reference is available. Implementations are found in most image processing toolboxes. For each dataset we report the average results over all processed images belonging to it.

For submitting the results, you need to follow these steps (a packaging sketch is shown after this list):

  1. Process the input images and keep the same name for the output image results as produced by your method (example: for an input file named "083.png" the output file should be "083.png").
    Note that the output images should be saved with lossless compression and should contain 16x the pixels of the input images (i.e., 4x the width and 4x the height).
  2. Create a ZIP archive containing all the output image results named as above and a readme.txt. Note that the archive should not include folders; all the images/files should be in the root of the archive.
  3. The readme.txt file should contain the following lines, filled in with the runtime per image (in seconds) of the solution, 1 or 0 depending on whether it employs CPU or GPU at runtime, and 1 or 0 depending on whether it employs extra data for training the models:
    runtime per image [s] : 10.43
    CPU[1] / GPU[0] : 1
    Extra Data [1] / No Extra Data [0] : 1
    Other description : Solution based on A+ of Timofte et al. ACCV 2014. We have a Matlab/C++ implementation, and report single core CPU runtime. The method was trained on Train 91 of Yang et al. and BSDS 200 of the Berkeley segmentation dataset.
    The last part of the file can contain any description you want of the code producing the provided results (dependencies, link, scripts, etc.).
    The provided information is very important, both during the validation period, when different teams can compare their results/solutions, and for establishing the final ranking of the teams and their methods.
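
As a convenience, the steps above can be scripted. The following is only a sketch under the stated rules (flat archive, lossless PNG outputs, a readme.txt in the root); the results folder name and the readme values are placeholders you would replace with your own.

    # Sketch of packaging a submission: flat ZIP with PNG results and a readme.txt.
    # Folder name and readme values are placeholders for your own solution.
    import os
    import zipfile

    RESULTS_DIR = 'results'  # your super-resolved PNGs, named like the inputs (e.g. 083.png)

    README = """runtime per image [s] : 10.43
    CPU[1] / GPU[0] : 1
    Extra Data [1] / No Extra Data [0] : 1
    Other description : <short description of your solution>
    """

    with zipfile.ZipFile('submission.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
        for name in sorted(os.listdir(RESULTS_DIR)):
            if name.lower().endswith('.png'):
                # arcname keeps the file in the archive root (no folders allowed).
                zf.write(os.path.join(RESULTS_DIR, name), arcname=name)
        zf.writestr('readme.txt', README)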

New Trends in Image Restoration and Enhancement (NTIRE) challenge on real world super-resolution @ CVPR 2020

 

Real World Super-Resolution Challenge

These are the official rules (terms and conditions) that govern how the NTIRE challenge on real world super-resolution 2020 will operate. This challenge will be simply referred to as the "challenge" or the "contest" throughout the remaining part of these rules and may be named as the "NTIRE" or "RWSR" benchmark, challenge, or contest elsewhere (our webpage, our documentation, other publications).

In these rules, "we", "our", and "us" refer to the organizers (Radu Timofte, Martin Danelljan and Andreas Lugmayr from ETH Zurich, CVL) of NTIRE challenge and "you" and "yourself" refer to an eligible contest participant.

Note that these official rules can change during the contest until the start of the final phase. If at any point during the contest a registered participant considers that they can no longer meet the eligibility criteria, or does not agree with changes in the official terms and conditions, then it is the responsibility of the participant to send an email to the organizers (Martin.Danelljan [at] vision.ee.ethz.ch, Andreas.Lugmayr [at] vision.ee.ethz.ch and Radu.Timofte [at] vision.ee.ethz.ch) asking to be removed from all records. Once the contest is over, no change is possible in the status of the registered participants and their entries.

1. Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

The goal of the contest is to super-resolve an input image to an output image with a magnification factor x4 and the challenge is called real world super-resolution.

Focus of the contest: a newly compiled dataset of at least ??0 images, adapted to the specific needs of the challenge, will be made available. The images have a large diversity of content. We will refer to this dataset, its partition, and related materials as RWSR (Real World Super-Resolution dataset). The dataset is divided into training, validation and testing data. We focus on two distinct settings: (track 1: realistic) real world super-resolution where the aim is to obtain output image results perceptually similar to a clean target domain of images, while the input domain is obtained from a clean dataset with a set of corruptions unknown to the participants, and (track 2: real) similar to track 1, except that the input/source domain is defined by images from a real camera sensor of poor quality. The participants will not have access to the ground truth images of the test data. For each track, the ranking of the participants is according to the performance of their methods on the test data. The participants will provide descriptions of their methods, details on (run)time complexity, and the (extra) data used for modeling. The winners will be determined according to their entries, the reproducibility of the results and uploaded codes or executables, and the above-mentioned criteria as judged by the organizers.

2. Tentative contest schedule

The registered participants will be notified by email if any changes are made to the schedule. The schedule is available on the NTIRE workshop web page and on the Overview of the Codalab competition.

3. Eligibility

You are eligible to register and compete in this contest only if you meet all the following requirements:

  • you are an individual or a team of people willing to contribute to the open tasks, who accepts to follow the rules of this contest
  • you are not an NTIRE challenge organizer or an employee of NTIRE challenge organizers
  • you are not involved in any part of the administration and execution of this contest
  • you are not a first-degree relative, partner, household member of an employee or of an organizer of NTIRE challenge or of a person involved in any part of the administration and execution of this contest

This contest is void wherever it is prohibited by law.

Entries submitted but not qualified to enter the contest are considered voluntary, and for any entry you submit NTIRE reserves the right to evaluate it for scientific purposes; however, under no circumstances will such entries qualify for sponsored prizes. If you are an employee, affiliate, or representative of any of the challenge sponsors, then you are allowed to enter the contest and get ranked; however, if you rank among the winners with eligible entries you will receive only a diploma award and none of the sponsored money, products, or travel grants.

NOTE: industry and research labs are allowed to submit entries and to compete in both validation phase and final test phase. However, in order to get officially ranked on the final test leaderboard and to be eligible for awards the reproducibility of the results is a must and, therefore, the participants need to make available and submit their codes or executables. All the top entries will be checked for reproducibility and marked accordingly.

We will have 3 categories of entries in the final test ranking:
1) checked with publicly released codes 
2) checked with publicly released executable
3) unchecked (with or without released codes or executables)

 

4. Entry

In order to be eligible for judging, an entry must meet all the following requirements:

Entry contents: the participants are required to submit image results and code or executables. To be eligible for prizes, the top-ranking participants should publicly release their code or executables under a license of their choice, chosen among popular OSI-approved licenses (http://opensource.org/licenses), and make their code or executables accessible online for a period of not less than one year following the end of the challenge (applies only to the top three ranked participants of the competition). To enter the final ranking the participants will need to fill out a survey (fact sheet) briefly describing their method. All participants are also invited (not mandatory) to submit a paper for peer review and publication at the NTIRE Workshop and Challenges (to be held on June 15, 2020, in Seattle, US). To be eligible for prizes, the participants' score must improve on the baseline performance provided by the challenge organizers.

Use of data provided: all data provided by NTIRE are freely available to the participants from the website of the challenge under license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. AIM and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains in property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify AIM and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.

  • Test data: The organizers will use the test data for the final evaluation and ranking of the entries. The ground truth test data will not be made available to the participants during the contest.
  • Training and validation data: The organizers will make available to the participants a training dataset with ground truth images and a validation dataset without ground truth images. At the start of the final phase the test data without ground truth images will be made available.
  • Post-challenge analyses: the organizers may also perform additional post-challenge analyses using extra-data, but without effect on the challenge ranking.
  • Submission: the entries will be online submitted via the CodaLab web platform. During development phase, while the validation server is online, the participants will receive immediate feedback on validation data. The final evaluation will be computed on the test data submissions and the final scores will be released after the challenge is over.
  • Original work, permissions: In addition, by submitting your entry into this contest you confirm that, to the best of your knowledge: - your entry is your own original work; and - your entry only includes material that you own, or that you have permission to use.

5. Potential use of entry

Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:

Are granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:

  1. Use, review, assess, test and otherwise analyze results submitted or produced by your code or executable and other material submitted by you in connection with this contest and any future research or contests by the organizers; and
  2. Feature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);

Agree to sign any necessary documentation that may be required for us and our designees to make use of the rights you granted above;

Understand and acknowledge that we and other entrants may have developed or commissioned materials similar or identical to your submission, and you waive any claims you may have resulting from any similarities to your entry;

Understand that we cannot control the incoming information you will disclose to our representatives or our co-sponsor’s representatives in the course of entering, or what our representatives will remember about your entry. You also understand that we will not restrict work assignments of representatives or our co-sponsor’s representatives who have had access to your entry. By entering this contest, you agree that use of information in our representatives’ or our co-sponsor’s representatives’ unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or copyright or trade secret law;

Understand that you will not receive any compensation or credit for use of your entry, other than what is described in these official rules.

If you do not want to grant us these rights to your entry, please do not enter this contest.

6. Submission of entries

The participants will follow the instructions on the CodaLab website to submit entries.

The participants will be registered as mutually exclusive teams. Each team is allowed to submit only one single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.

The participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.

7. Judging the entries

The board of NTIRE will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select three winners for each of the two competition tracks based upon the prediction score on test data. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.

The decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

8. Prizes and Awards

The financial sponsors of this contest are listed on the NTIRE 2020 workshop web page. There will be economic incentive prizes and travel grants for the winners (based on availability) to boost contest participation; these prizes will not require participants to enter into an IP agreement with any of the sponsors, to disclose algorithms, or to deliver source code to them. The participants affiliated with the industry sponsors agree not to receive any sponsored money, product, or travel grant in case they are among the winners.

Incentive Prizes for each track competitions (tentative, the prizes depend on attracted funds from the sponsors)

  • 1st place: ?00$ + ?GPU + award certificate
  • 2nd place: ?00$ + ?GPU + award certificate
  • 3rd place: ?00$ + award certificate

9. Other Sponsored Events

Publishing papers is optional and will not be a condition for entering the challenge or winning prizes. The top-ranking participants are invited to submit a paper of at most 8 pages (following the CVPR 2020 author rules) for peer review to the NTIRE workshop.

The results of the challenge will be published together with NTIRE 2020 workshop papers in the 2020 CVPR Workshops proceedings.

The top ranked participants and participants contributing interesting and novel methods to the challenge will be invited to be co-authors of the challenge report paper which will be published in the 2020 CVPR Workshops proceedings. A detailed description of the ranked solution as well as the reproducibility of the results are a must to be an eligible co-author.

10. Notifications

If there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified at the email they provided with the registration.

Within seven days following the determination of winners we will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.

The prize, such as money, product, or travel grant, will be delivered to the registered team leader, given that the team is not affiliated with any of the sponsors. It is up to the team to share the prize. If this person becomes unavailable for any reason, the prize will be delivered to the authorized account holder of the e-mail address used to make the winning entry.

If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity and liability/publicity release and applicable tax forms. If you are a potential winner and are a minor in your place of residence, we require that your parent or legal guardian be designated as the winner, and we may require that they sign a declaration of eligibility, use, indemnity and liability/publicity release on your behalf. If you (or your parent/legal guardian, if applicable) do not sign and return these required forms within the time period listed on the winner notification message, we may disqualify you (or the designated parent/legal guardian) and select an alternate winner.

 


The terms and conditions are inspired by, and use verbatim text from, the `Terms and conditions' of the ChaLearn Looking at People challenges and of the NTIRE 2017, 2018, and 2019 challenges.


Organizers

 

The NTIRE challenge on real world super-resolution is organized jointly with the NTIRE 2020 workshop. The results of the challenge will be published at NTIRE 2020 workshop and in the CVPR 2020 Workshops proceedings.

 

Martin Danelljan (martin.danelljan [at] vision.ee.ethz.ch), Andreas Lugmayr (andreas.lugmayr [at] vision.ee.ethz.ch) and Radu Timofte (Radu.Timofte [at] vision.ee.ethz.ch) are the contact persons and direct managers of the NTIRE challenge.

 

More information about NTIRE 2020 workshop and challenge organizers is available here: http://www.vision.ee.ethz.ch/ntire20/
