PIRM 2018 Spectral Image Super-Resolution Challenge- Track 1


First phase: Development
Start: June 8, 2018, 11:59 p.m. UTC

Competition ends: Aug. 17, 2018, 11:59 p.m. UTC

The PIRM 2018 Challenge on Example-based Spectral Image Super-resolution

 

Important dates

  • June 8, 2018, Release of the relevant datasets
  • July 9, 2018, Validation server online
  • August 17, 2018, Test results and fact sheets submission deadline
  • August 18, 2018, Code submission deadline
  • August 22, 2018, Final test results released to the participants
  • August 29, 2018, Paper submission deadline for top entries from the challenge
  • September 14, 2018, PIRM workshop and associated challenges, results and award ceremony (ECCV 2018, Munich, Germany)

Challenge overview

The PIRM workshop will be held on September 14, 2018, in conjunction with ECCV 2018 in Munich, Germany.

Image super-resolution is a classical problem that has found applications in areas such as video processing, light field imaging and image reconstruction. As applied to spectral imaging, super-resolution is a key task to improve the spatial resolution of imaging spectroscopy data. Moreover, despite the major advantages of new spectral imaging sensors with filters fully integrated into the CMOS or CCD detector, one of their main drawbacks is the low raw spatial resolution per wavelength-indexed band in the image. Thus, super-resolution is an important means of substantially improving the spatial resolution of these devices.

A key goal in image restoration, manipulation and generation, is to produce images that are visually appealing to human observers. In recent years, there has been great interest as well as significant progress in perceptually-aware computer vision algorithms. However, many works have observed a fundamental disagreement between this recent leap in performance, as evaluated by human observers, and the objective assessment of these methods by common evaluation metrics (e.g. PSNR, SSIM). This workshop will revolve around two main themes: (i) How to design algorithms which satisfy human observers, and (ii) How to evaluate the perceptual quality of such algorithms.

This challenge is motivated by the notion that, by using machine learning techniques, single image super-resolution systems can be trained so as to obtain reliable multispectral super-resolved images at test time. Track 1 (spectral image super-resolution) focuses on the problem of super-resolving the spatial resolution of spectral images given training pairs of low and high spatial resolution spectral images.

Track 1: “Spectral Image Super-Resolution”

In this track, the aim is to obtain 3x spatially super-resolved spectral images making use of training imagery that was down-sampled with a bicubic kernel.
For all the spectral images, we compute the corresponding pseudocolour (RGB) image making use of the CIE 2-degree colour matching functions. We refer the interested reader to the paper by [Fairman et al.] for a motivation of the principles of the CIE colorimetry system used here. A sketch of this pseudocolour computation is given below.
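The following minimal Python sketch illustrates such a pseudocolour projection, assuming the spectral cube is an (H, W, B) array and that cmf holds the CIE 2-degree colour matching functions sampled at the cube's band centres; the normalisation and the simplified gamma step are illustrative assumptions, and the scripts provided with the datasets remain authoritative.

    import numpy as np

    def cube_to_pseudocolour(cube, cmf):
        # cube: (H, W, B) spectral cube; cmf: (3, B) colour matching
        # functions (x-bar, y-bar, z-bar) sampled at the band centres.
        xyz = np.tensordot(cube, cmf.T, axes=([2], [0]))  # (H, W, 3) CIE XYZ
        xyz = xyz / (xyz[..., 1].max() + 1e-12)           # normalise by peak luminance Y
        # Standard CIE XYZ -> linear sRGB (D65) matrix.
        m = np.array([[ 3.2406, -1.5372, -0.4986],
                      [-0.9689,  1.8758,  0.0415],
                      [ 0.0557, -0.2040,  1.0570]])
        rgb = np.clip(xyz @ m.T, 0.0, 1.0)
        # Simplified gamma in place of the exact sRGB transfer curve.
        return (rgb ** (1.0 / 2.2) * 255.0).astype(np.uint8)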

All submitted images will be evaluated with respect to two criteria. The first concerns the fidelity of the reconstruction of the spectra in the super-resolved spectral images. The second concerns the perceptual quality, at test time, of the corresponding pseudocolour images (the colour analogue of the spectral imagery).

The top ranked participants on each track will receive awards and will be invited to follow the ECCV 2018 workshop submission guide to describe their solution and submit it to the associated PIRM workshop at ECCV 2018.

Note that, for the final ranking and the selection of challenge winners, greater weight will be given to teams/participants with entries in more than one track. Ideally, each participant will have entries for both tracks.

Competitions

To learn more about each competition and to participate in the challenge, everybody is invited to register.

The relevant training, validation and testing data will be made available to the registered participants.

Provided Resources

  • Scripts: With the dataset, the organizers will provide scripts to facilitate the reproducibility of the images and of the performance evaluation results once the validation server is online. More information is provided on the data page.
  • Contact: You can use the forum on the data description page (highly recommended!) or directly contact the challenge organizers by email (Antonio.Robles-Kelly [at] data61.csiro.au, Mehrdad.Shoeiby [at] data61.csiro.au and Radu.Timofte [at] vision.ee.ethz.ch) if you have any questions.


Evaluation

The evaluation consists of comparing the restored hyperspectral images (and their pseudocolour renderings, in the case of Track 2) with their corresponding ground truth. For this we use the mean relative absolute error (MRAE), the spectral information divergence (SID) and the mean observer score (MOS) as our competition metrics. The average per-band mean squared error (MSE), the average per-pixel spectral angle, the average per-image structural similarity index (SSIM) and the mean per-image peak signal-to-noise ratio (PSNR) will be reported as well, but will not affect the submission rankings. For each dataset we report the average results over all the processed images belonging to it. A sketch of the two ranking metrics is provided below.
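For reference, the following is a minimal Python sketch of the two ranking metrics under their common definitions; the epsilon terms, added to avoid division by zero, are assumptions, and the organizers' evaluation scripts are authoritative.

    import numpy as np

    def mrae(gt, est, eps=1e-6):
        # Mean relative absolute error over all pixels and bands.
        return float(np.mean(np.abs(gt - est) / (np.abs(gt) + eps)))

    def sid(gt, est, eps=1e-12):
        # Symmetric spectral information divergence, averaged over pixels:
        # each spectrum is normalised to a distribution and the two
        # Kullback-Leibler divergences are summed.
        p = gt.reshape(-1, gt.shape[-1]).astype(np.float64) + eps
        q = est.reshape(-1, est.shape[-1]).astype(np.float64) + eps
        p = p / p.sum(axis=1, keepdims=True)
        q = q / q.sum(axis=1, keepdims=True)
        return float(np.mean(np.sum(p * np.log(p / q) + q * np.log(q / p), axis=1)))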

For submitting the results, you need to follow these steps:

  1. Process the input images and produce a corresponding estimated hyperspectral image and, for Track 2, the colour image. Save the testing output spectral image in ENVI standard format (this may be done using the MATLAB scripts provided with the datasets) and, for Track 2, the colour images in PNG format. Note that all the spectral images should be saved with both the Flat (fla) and Header (hdr) files included. The file name should be identical to the input image filename, with the "_XXX" suffix substituted by "tr1" or "tr2" depending on whether the results correspond to Track 1 (tr1) or Track 2 (tr2). For instance, the Flat file for Image 221 in the testing set of Track 1, named image_221_lr2.fla, should be saved as image_221_tr1.fla.
  2. Create a ZIP archive containing all the image results named as above. Note that the archive should not include folders; all the images/files should be in the root of the archive (a packaging sketch is given below).
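As a convenience, the following Python sketch covers both the renaming convention and the flat ZIP archive for Track 1; the results/ directory is a hypothetical location for your output files.

    import re
    import zipfile
    from pathlib import Path

    # Collect the .fla/.hdr outputs, rewrite the suffix to "_tr1", and
    # store everything at the root of the archive (no folders).
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(Path("results").glob("image_*")):
            # e.g. image_221_lr2.fla -> image_221_tr1.fla
            name = re.sub(r"_[A-Za-z0-9]+(\.(?:fla|hdr))$", r"_tr1\1", f.name)
            zf.write(f, arcname=name)  # arcname keeps the archive flat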

 


Terms and Conditions

These are the official rules (terms and conditions) that govern how the PIRM challenge on example-based spectral image super-resolution will operate. This challenge will simply be referred to as the "challenge" or the "contest" throughout the remainder of these rules, and may be referred to as "PIRM" or as the spectral benchmark, challenge, or contest elsewhere (our web page, our documentation, other publications).

In these rules, "we", "our", and "us" refer to the organizers of the PIRM challenges and "you" and "yourself" refer to an eligible contest participant.

Note that these official rules can change during the contest until the start of the final phase. If at any point during the contest a registered participant considers that they can no longer meet the eligibility criteria, or does not agree with changes to the official terms and conditions, it is the participant's responsibility to send an email to the organizers (mehrdad.shoeiby [at] data61.csiro.au, antonio.robles-kelly [at] data61.csiro.au and radu.timofte [at] vision.ee.ethz.ch) asking to be removed from all the records. Once the contest is over, no change is possible in the status of the registered participants and their entries.

1. Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

The goal of the contest is to obtain a high spatial resolution image from a low spatial resolution input image; the challenge is called example-based spectral image super-resolution.

Focus of the contest: a newly collected dataset of 350 spectral images, gathered and adapted for the specific needs of the challenge, will be made available. The images are no larger than 1400 pixels in either the horizontal or the vertical direction and have a large diversity of contents. We will refer to this dataset, its partition, and related materials as PIRM2018-SI (Perceptual Image Restoration and Manipulation Spectral Image dataset). The dataset is divided into two subsets (one for each track), each of which is further divided into training, validation and testing data. The two tracks focus on two distinct settings: (Track 1) "Spectral Image Super-resolution", where the aim is to obtain a high-resolution spectral image at output from a lower-resolution image at input, and (Track 2) "Colour-Aided Super-resolution", where the RGB image of the scene is also available as part of the training and validation data. The participants will not have access to the ground truth spectral images from the test data. For each track, the ranking of the participants is according to the performance of their methods on the test data. The participants will provide descriptions of their methods, details on (run) time complexity and (extra) data used for modelling. The winners will be determined according to their entries, the reproducibility of the results and uploaded codes or executables, and the above-mentioned criteria as judged by the organizers.

2. Tentative contest schedule

The registered participants will be notified by email if any changes are made to the schedule.

  • May 25, 2018 - Release of train data and validation data (only RGB images)
  • July 1, 2018 - Validation server online
  • July 25, 2018 - Final results submission deadline
  • August 1, 2018 - Challenge results released to participants
  • August 22, 2018 - Paper submission deadline for top entries from the challenge
  • September 5, 2018 - Notification of accepted papers
  • September 14, 2018 - PIRM 2018 Workshop

3. Eligibility

You are eligible to register and compete in this contest only if you meet all the following requirements:

  • you are an individual or a team of people willing to contribute to the open tasks, and you agree to follow the rules of this contest
  • you are not a PIRM challenge organizer or an employee of PIRM challenge organizers
  • you are not involved in any part of the administration and execution of this contest
  • you are not a first-degree relative, partner, household member of an employee or of an organizer of PIRM challenge or of a person involved in any part of the administration and execution of this contest

This contest is void wherever it is prohibited by law.

Entries submitted that do not qualify for the contest are considered voluntary; for any entry you submit, PIRM reserves the right to evaluate it for scientific purposes. However, under no circumstances will such entries qualify for sponsored prizes. If you are an employee, affiliate or representative of any of the PIRM challenge sponsors, then you are allowed to enter the contest and get ranked; however, if you rank among the winners with eligible entries, you will receive only a diploma award and none of the sponsored money, GPU or travel grants.

NOTE: industry and research labs are allowed to submit entries and to compete in both the validation phase and the final test phase. However, in order to be officially ranked on the final test leaderboard and to be eligible for awards, the reproducibility of the results is a must. Therefore, the participants need to make available and submit their codes or executables. All the top entries will be checked for reproducibility and marked accordingly.

We will have 3 categories of entries in the final test ranking:
1) checked with publicly released codes
2) checked with publicly released executable
3) unchecked (with or without released codes or executables)

 

4. Entry

In order to be eligible for judging an entry must meet all the following requirements:

Entry contents: the participants are required to submit image results and code or executables. To be eligible for prizes, the top ranking participants should publicly release their code or executables under a license of their choice, taken among popular OSI-approved licenses (http://opensource.org/licenses), and make their code or executables accessible online for a period of not less than one year following the end of the challenge (this applies only to the top three ranked participants of the competition). To enter the final ranking, the participants will need to fill out a survey (fact sheet) briefly describing their method. All the participants are also invited (not mandatory) to submit a paper for peer-reviewing and publication at the PIRM Workshop and challenge on example-based spectral image super-resolution (to be held on September 14, 2018, Munich, Germany). To be eligible for prizes, the participants' scores must improve on the baseline performance provided by the challenge organizers.

Use of data provided: all data provided by PIRM are freely available to the participants from the website of the challenge under the license terms provided with the data. The data are available only for open research and educational purposes, within the scope of the challenge. PIRM and the organizers make no warranties regarding the database, including but not limited to warranties of non-infringement or fitness for a particular purpose. The copyright of the images remains the property of their respective owners. By downloading and making use of the data, you accept full responsibility for using the data. You shall defend and indemnify PIRM and the organizers, including their employees, Trustees, officers and agents, against any and all claims arising from your use of the data. You agree not to redistribute the data without this notice.

  • Test data: The organizers will use the test data for the final evaluation and ranking of the entries. The ground truth test data will not be made available to the participants during the contest.
  • Training and validation data: The organizers will make available to the participants a training dataset with ground truth images and a validation dataset without ground truth images. The ground truth images for the validation dataset will be released at the start of the final phase, when the test data without ground truth images will be made available.
  • Post-challenge analyses: the organizers may also perform additional post-challenge analyses using extra-data, but without effect on the challenge ranking.
  • Submission: the entries will be submitted online via the CodaLab web platform. During the development phase, while the validation server is online, the participants will receive immediate feedback on validation data. The final evaluation will be computed automatically on the test data submissions, but the final scores will be released only after the challenge is over.
  • Original work, permissions: in addition, by submitting your entry into this contest you confirm that, to the best of your knowledge, your entry is your own original work, and your entry only includes material that you own or that you have permission to use.

5. Potential use of entry

Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:

Are granting us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential prize awards, for the duration of the protection of the copyrights to:

  1. Use, review, assess, test and otherwise analyze results submitted or produced by your code or executable and other material submitted by you in connection with this contest and any future research or contests by the organizers; and
  2. Feature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);

Agree to sign any necessary documentation that may be required for us and our designees to make use of the rights you granted above;

Understand and acknowledge that we and other entrants may have developed or commissioned materials similar or identical to your submission, and you waive any claims you may have resulting from any similarities to your entry;

Understand that we cannot control the incoming information you will disclose to our representatives or our co-sponsor's representatives in the course of entering, or what our representatives will remember about your entry. You also understand that we will not restrict work assignments of representatives or of our co-sponsor's representatives who have had access to your entry. By entering this contest, you agree that use of information in our representatives' or our co-sponsor's representatives' unaided memories in the development or deployment of our products or services does not create liability for us under this agreement or under copyright or trade secret law;

Understand that you will not receive any compensation or credit for use of your entry, other than what is described in these official rules.

If you do not want to grant us these rights to your entry, please do not enter this contest.

6. Submission of entries

The participants will follow the instructions on the CodaLab website to submit entries.

The participants will be registered as mutually exclusive teams. Each team is allowed to submit only one final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.

The participants must follow the instructions and the rules. We will automatically disqualify incomplete or invalid entries.

7. Judging the entries

The board of PIRM will select a panel of judges to judge the entries; all judges will be forbidden to enter the contest and will be experts in causality, statistics, machine learning, computer vision, or a related field, or experts in challenge organization. A list of the judges will be made available upon request. The judges will review all eligible entries received and select winners based upon the prediction scores on the test data, the fact sheet description, complexity and reproducibility. The judges will verify that the winners complied with the rules, including that they documented their method by filling out a fact sheet.

The decisions of these judges are final and binding. The distribution of prizes according to the decisions made by the judges will be made within three (3) months after completion of the last round of the contest. If we do not receive a sufficient number of entries meeting the entry requirements, we may, at our discretion based on the above criteria, not award any or all of the contest prizes below. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the submission platform.

8. Other Sponsored Events

Publishing papers is optional and will not be a condition for entering the challenge or winning prizes. The top ranking participants are invited to submit a paper of at most 14 pages (excluding references, per the ECCV 2018 author rules) for peer-reviewing to the PIRM workshop.

The results of the challenge will be published together with the PIRM 2018 workshop papers in the 2018 ECCV Workshops proceedings.

The top ranked participants and participants contributing interesting and novel methods to the challenge will be invited to be co-authors of the challenge report paper, which will be published in the 2018 ECCV Workshops proceedings. A detailed description of the ranked solution, as well as the reproducibility of the results, is a must to be an eligible co-author.

9. Notifications

If there is any change to data, schedule, instructions of participation, or these rules, the registered participants will be notified at the email they provided with the registration.

Within seven days following the determination of winners we will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may award the prize to an alternate winner, unless forbidden by applicable law.

Any prizes, such as money, GPU, or travel grants, will be delivered to the registered team leader, provided that the team is not affiliated with any of the sponsors. It is up to the team to share the prize. If this person becomes unavailable for any reason, the prize will be delivered to the authorized account holder of the e-mail address used to make the winning entry.

Winners and potential winners may be required to sign a declaration of eligibility, use, indemnity and liability/publicity release and applicable tax forms. If a potential winner is a minor in his/her place of residence, we require that a parent or legal guardian be designated as the winner, and we may require that they sign a declaration of eligibility, use, indemnity and liability/publicity release on the minor's behalf. If a potential winner (or their parent/legal guardian, if applicable) does not sign and return these required forms within the time period listed on the winner notification message, we may disqualify the potential winner (or the designated parent/legal guardian) and select an alternate winner.

 


The terms and conditions are inspired by the "Terms and Conditions" of the NTIRE 2018 Spectral Reconstruction Challenge and the ChaLearn Looking at People challenges.


Organizers

 

The PIRM 2018 Challenge on Example-based Spectral Image Super-resolution is organized jointly with the PIRM 2018 workshop. The results of the challenge will be published at the PIRM 2018 workshop and in the ECCV 2018 Workshops proceedings.

 

Antonio Robles-Kelly (Antonio.Robles-Kelly@data61.csiro.au) and Mehrdad Shoeiby (Mehrdad.Shoeiby@data61.csiro.au) and Radu Timofte (Radu.Timofte@vision.ee.ethz.ch) are the contact persons and direct managers of the PIRM challenge on example-based spectral image super-resolution.

 

More information about PIRM workshop and challenge organizers is available at: https://www.pirm2018.org/

Development

Start: June 8, 2018, 11:59 p.m.

Description: During the development phase, the participants get access to the data and develop their solutions offline.

Validation

Start: July 9, 2018, midnight

Description: During the validation phase, the participants have the opportunity to test their solutions on the online server.

Testing

Start: Aug. 10, 2018, midnight

Description: During the test phase, the participants submit their final results to the server and the fact sheets describing their solutions to the organizers.

Competition Ends

Aug. 17, 2018, 11:59 p.m.
