UIoU Dark Zurich at Vision for All Seasons Workshop, CVPR 2020

Organized by sakaridis


The UIoU Dark Zurich Challenge @ Vision for All Seasons Workshop, CVPR 2020

Important Dates

  • 2020.02.29 Release of unlabeled training set (RGB images and cross-time-of-day correspondences) and validation set (only RGB images, annotations withheld for development phase)
  • 2020.02.29 Validation server online
  • 2020.05.25 Release of final test set (only RGB images, annotations withheld)
  • 2020.06.01 Submission deadline for test set results
  • 2020.06.01 Paper submission deadline for entries from the challenge
  • 2020.06.14 Vision for All Seasons workshop, challenge results and announcement of winners (CVPR 2020, Seattle, USA)

Download Note

The training set of Dark Zurich is available only on our server, via the link https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_train_anon.zip (also included on our project website). This is due to size limitations of the CodaLab server. The remaining parts of Dark Zurich that are required for the challenge are available on CodaLab as usual.
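
For reference, the archive can also be fetched with a few lines of Python; this is just a convenience sketch using the link above (the archive is large, so expect a long download):

```python
# Convenience sketch: download the Dark Zurich training set from the link above.
import urllib.request

url = ("https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/"
       "Dark_Zurich_train_anon.zip")
# urlretrieve streams the (large) archive straight to disk.
urllib.request.urlretrieve(url, "Dark_Zurich_train_anon.zip")
```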

Challenge Overview

The Vision for All Seasons: Adverse Weather and Lighting Conditions workshop will be held on June 14, 2020 in conjunction with CVPR 2020 in Seattle, USA.

Adverse weather and illumination conditions (e.g. fog, rain, snow, ice, low light, nighttime, glare, and shadows) create visibility problems for the sensors that power automated systems. Many outdoor applications, such as autonomous cars and surveillance systems, are nevertheless required to operate smoothly in bad weather, which occurs frequently. While rapid progress is being made in this direction, the performance of current vision algorithms is still mainly benchmarked under clear conditions (good weather, favorable lighting). Even top-performing state-of-the-art algorithms undergo severe performance degradation under adverse conditions. The aim of the "Vision for All Seasons" workshop is to promote research into the design of robust vision algorithms for adverse weather and illumination conditions.

Jointly with the "Vision for All Seasons" workshop, we organize the "UIoU Dark Zurich" challenge on uncertainty-aware semantic nighttime image segmentation. The challenge uses the Dark Zurich dataset, presented in the ICCV 2019 paper "Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation", which contains a total of 8779 images captured at nighttime, twilight, and daytime, along with the GPS coordinates of the camera for each image. Evaluation of semantic segmentation models on the labeled nighttime part of Dark Zurich follows a novel, uncertainty-aware framework: corresponding daytime images are leveraged during annotation to assign reliable semantic labels to image regions that are originally indiscernible, i.e. beyond human recognition capability in the nighttime image alone, and these invalid regions are included in the evaluation jointly with valid regions. The framework is centered on UIoU (uncertainty-aware IoU), a new performance metric that generalizes the standard IoU and allows the selective invalidation of predictions, which is crucial for safety-oriented systems handling inputs with potentially ambiguous content, as in the adverse-conditions scenario. UIoU rewards models which place higher confidence on valid regions than on invalid ones, i.e. which behave consistently with human annotators.

For training their models, participants are not given a labeled training set but rather an unlabeled one. They are encouraged to additionally leverage external sources of strong supervision (e.g. models pretrained on daytime datasets), the weak supervision provided by the cross-time-of-day correspondences in Dark Zurich, and domain adaptation techniques.

Provided Resources

  • Scripts: The participants are provided the evaluation script that is used to compute performance on the server.
  • Contact: Participants can use the forum (recommended) or directly contact the challenge organizers via e-mail in case of doubts or questions.


Evaluation

The UIoU Dark Zurich challenge aims to establish our novel uncertainty-aware semantic segmentation evaluation, based on the UIoU (uncertainty-aware IoU) metric, for use on nighttime and other adverse-condition datasets with potentially ambiguous image content. In general terms, the task is to parse nighttime images from Dark Zurich into the standard set of 19 Cityscapes classes, plus an invalid label for pixels whose content is deemed uncertain.

UIoU Dark Zurich runs in two phases: the development phase and the testing phase. Performance associated with submissions is reported in a leaderboard in each phase.

  1. In the development phase, participants are given access to the training and validation image sets of Dark Zurich and are expected to optimize their models based on performance on the validation set. Validation annotations are withheld in this phase.
  2. In the testing phase, participants are given access to the test image set and are expected to use their optimized model from the preceding development phase in order to make predictions on the test set. Thus, this phase lasts only for a few days.

The 151 nighttime test ground-truth annotations are withheld, so that they permanently serve as an objective benchmark for the task of semantic nighttime image segmentation. The 50 nighttime validation annotations are also withheld for the duration of the challenge and will be made publicly available after its completion.

In each phase, participants need to submit a .zip file containing three (3) subdirectories (corresponding to three different result modalities), with the following names (an encoding sketch follows the list):

  • labelTrainIds: predictions of semantic labels encoded using png images, where pixel values encode labels in Cityscapes trainIDs format according to Cityscapes documentation script helpers/labels.py.
  • confidence: confidence maps corresponding to the predicted labels, encoded using uint16 png images, where pixel values range from 0 to 65535. A value of 0 corresponds to confidence equal to 0.0, a value of 65535 corresponds to confidence equal to 1.0, and all in-between pixel values are mapped to confidence values with linear interpolation.
  • labelTrainIds_invalid: predictions of semantic labels including the special label invalid, encoded using png images. Pixel values encode labels in Cityscapes trainIDs format or the invalid label for the value 255. An invalid prediction for a pixel indicates that the model has not made a prediction for that pixel, typically due to low associated confidence.
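
To make these formats concrete, below is a minimal Python sketch for writing one image's results in all three modalities. The function name write_results, its arguments, and the submission/ output root are illustrative assumptions, not official tooling; only the encodings follow the specification above.

```python
import os

import numpy as np
from PIL import Image

def write_results(stem, label_train_ids, confidence, invalid_mask,
                  out_dir="submission"):
    """Write one image's predictions in the three required modalities.

    stem:            file name stem, e.g. "GOPR0356_frame_000321" (illustrative)
    label_train_ids: (H, W) array of Cityscapes trainIDs
    confidence:      (H, W) float array of confidences in [0, 1]
    invalid_mask:    (H, W) bool array, True where the model abstains
    """
    for sub in ("labelTrainIds", "confidence", "labelTrainIds_invalid"):
        os.makedirs(os.path.join(out_dir, sub), exist_ok=True)

    # 1) Semantic labels in trainIDs format, as an 8-bit png.
    labels = label_train_ids.astype(np.uint8)
    Image.fromarray(labels).save(
        os.path.join(out_dir, "labelTrainIds", stem + ".png"))

    # 2) Confidence mapped linearly to uint16: 0 -> 0.0, 65535 -> 1.0.
    #    Pillow stores a uint16 array as a 16-bit grayscale png.
    conf16 = np.round(np.clip(confidence, 0.0, 1.0) * 65535).astype(np.uint16)
    Image.fromarray(conf16).save(
        os.path.join(out_dir, "confidence", stem + ".png"))

    # 3) Labels with abstained pixels set to 255, the invalid label.
    labels_invalid = labels.copy()
    labels_invalid[invalid_mask] = 255
    Image.fromarray(labels_invalid).save(
        os.path.join(out_dir, "labelTrainIds_invalid", stem + ".png"))
```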

Each directory needs to contain exactly one result file for every image in the evaluation set. In particular, for each image with a file name in the format {sequence}_frame_{frame:0>6}_{type}{ext}, the evaluation script searches the directory for a matching file with a name following the pattern {sequence}_frame_{frame:0>6}*.png. If zero matches are found, or two or more matching files are detected, the evaluation fails.
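
The matching rule can be mimicked with a simple glob lookup; the following is a sketch of the described behavior, not the actual evaluation script:

```python
import glob
import os

def find_result(result_dir, sequence, frame):
    """Return the unique result file for one image, per the rule above."""
    pattern = os.path.join(result_dir, f"{sequence}_frame_{frame:0>6}*.png")
    matches = glob.glob(pattern)
    if len(matches) != 1:  # zero or multiple matches fail the evaluation
        raise RuntimeError(
            f"expected exactly one match for {pattern}, found {len(matches)}")
    return matches[0]
```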

The three result modalities are used to compute three performance metrics:

  1. UIoU: The primary metric of the challenge. It is calculated from the labelTrainIds_invalid predictions. If no pixel is predicted as invalid, it is by definition equal to IoU.
  2. Average UIoU: It is calculated from the labelTrainIds and the confidence predictions. A total of 101 confidence thresholds uniformly distributed between 1/19 and 1 are applied to the confidence predictions in order to selectively invalidate labelTrainIds predictions. Average UIoU is calculated by averaging the UIoU results over all thresholds (see the sketch after this list).
  3. IoU: The standard IoU metric for semantic segmentation. It is calculated from the labelTrainIds predictions.
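
The Average UIoU protocol of item 2 can be sketched as follows. Here compute_uiou stands for a hypothetical scoring function that evaluates label maps in which abstained pixels carry the invalid value 255; whether pixels exactly at a threshold are kept or invalidated is our assumption, not a detail specified above.

```python
import numpy as np

def average_uiou(label_train_ids, confidence, compute_uiou):
    """Sketch of Average UIoU: invalidate below each threshold, then average."""
    # 101 thresholds uniformly distributed between 1/19 and 1, as specified.
    thresholds = np.linspace(1.0 / 19.0, 1.0, num=101)
    scores = []
    for t in thresholds:
        invalidated = label_train_ids.copy()
        invalidated[confidence < t] = 255  # abstain below threshold (strictness assumed)
        scores.append(compute_uiou(invalidated))
    return float(np.mean(scores))
```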


Terms and Conditions

These are the official rules (terms and conditions) that govern how the UIoU Dark Zurich challenge on uncertainty-aware semantic nighttime image segmentation will operate. This challenge will be simply referred to as the "challenge", the "competition" or the "contest" throughout the remaining part of these rules and may be named as "UIoU Dark Zurich" or "Vision for All Seasons" benchmark, challenge, competition or contest, elsewhere, including but not limited to our webpage, our documentation, and other publications.

In these rules, "we", "our", and "us" refer to the organizers (csakarid [at] vision.ee.ethz.ch and dai [at] vision.ee.ethz.ch) of the challenge and "you" and "yourself" refer to an eligible contest participant.

1. Contest description

This is a skill-based contest and chance plays no part in the determination of the winner(s).

The goal of the contest is to correctly parse the semantic content of nighttime images.

Focus of the contest: the Dark Zurich dataset will be made available for the challenge. The dataset is divided into three subsets: training, validation, and test. The participants will not have access to the ground-truth semantic labels of the test data. Participants are ranked according to the performance of their methods on the test data. The primary performance metric for determining the ranking is UIoU (uncertainty-aware IoU). The winners will be determined according to the ranking of their entries and additional criteria (including but not limited to the novelty of the developed methods), as judged by the organizers.

2. License agreement for the Dark Zurich dataset

The Dark Zurich dataset is made freely available, either in the context of the challenge or outside it, to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:

  1. That the dataset comes "AS IS", without express or implied warranty. Although every effort has been made to ensure accuracy, we (ETH Zurich) do not accept any responsibility for errors or omissions.
  2. That you include a reference to the Dark Zurich Dataset in any work that makes use of the dataset. For research papers, cite the relevant publications as listed on the website https://www.trace.ethz.ch/publications/2019/GCMA_UIoU/; for other media, cite the relevant publications as listed on the aforementioned website or include a link to this website.
  3. That you do not distribute this dataset. It is permissible to distribute derivative works insofar as they are additional annotations of this dataset that do not directly include any of our data, or constitute abstract representations of the dataset (such as models trained on it).
  4. That you may not use the dataset or any derivative work for commercial purposes, for example licensing or selling the data, or using the data with the purpose of procuring commercial gain.
  5. That all rights not expressly granted to you are reserved by us.

3. Tentative contest schedule

The registered participants will be notified by e-mail if any changes are made to the schedule. The schedule is available on the Vision for All Seasons workshop website and on the Overview page of the present CodaLab competition website.

4. Eligibility

You are eligible to compete in this contest only if you meet all the following requirements:

  • you are an individual or a team of people willing to contribute to the focus task, who accepts to follow the rules of this contest
  • you are not a Vision for All Seasons challenge organizer or an employee of Vision for All Seasons challenge organizers
  • you are not involved in any part of the administration and execution of this contest
  • you are not a first-degree relative, partner, household member of an employee or of an organizer of the Vision for All Seasons challenge or of a person involved in any part of the administration and execution of this contest

This contest is void wherever it is prohibited by law.

NOTE: industry and research labs are allowed to submit entries and to compete in both the development phase and the testing phase.

5. Entry

In order to be eligible for judging, an entry must meet all the following requirements:

Entry contents: the participants are required to submit result files in zipped archives. In order to deem participants eligible for winning the competition, we reserve the right to apply additional criteria beyond the entry's test-set leaderboard ranking. Such criteria include but are not limited to the reproducibility of the results and the novelty of the method used for the relevant entry.

  • Submission: the entries will be submitted online via the CodaLab web platform. During the development and testing phases, the participants will receive immediate online feedback on the performance of their submissions on the validation set and the test set, respectively.
  • Original work, permissions: by submitting your entry into this contest you confirm that, to the best of your knowledge, 1) your entry is your own original work; and 2) your entry only includes material that you own, or that you have permission to use.

6. Potential use of entry

Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:

  • Grant us an irrevocable, worldwide right and license, in exchange for your opportunity to participate in the contest and potential awards, for the duration of the protection of the copyrights to:
    1. Use, review, assess, test and otherwise analyze results and other material submitted by you in the context of this contest and any future research or contests by the organizers; and
    2. Feature your entry and all its content in connection with the promotion of this contest in all media (now known or later developed);
  • Agree to sign any necessary documentation that may be required for us or our designees to make use of the rights you granted above;
  • Understand and acknowledge that we or other participants may have developed or commissioned materials similar to your submission, and waive any claims you may have resulting from any similarities to your entry;
  • Understand that you will not receive any compensation or credit for your entry, other than what is described in these official rules.

If you do not want to grant us these rights to your entry, please do not enter this contest.

7. Submission of entries

The participants will follow the instructions on the CodaLab website to submit entries.

Each participant is allowed to submit only a single final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but do not work properly.

The participants must follow the official rules. We will immediately disqualify invalid entries.

8. Judging the entries

The board of Vision for All Seasons will judge the entries. The judges will review all eligible entries received and determine a list of winners of the competition based upon the performance on the test set and the additional criteria mentioned in paragraph 5. The judges will verify that the winners complied with the rules.

If we do not receive a sufficient number of entries meeting the requirements, we may, at our discretion based on the above criteria, not declare any winner for the contest. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the CodaLab submission platform.

9. Notifications

We will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may disqualify you from the list of winners and select another eligible participant in your place, unless forbidden by applicable law.

If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity and liability/publicity release and applicable tax forms. If you (or your parent/legal guardian if applicable) do not sign and return these required forms within the time period listed on the winner notification message, we may disqualify you from the list of winners and select another eligible participant in your place.


The terms and conditions are inspired by and use verbatim text from the "Terms and conditions" of ChaLearn Looking at People Challenges and of the NTIRE 2017, 2018, 2019 and 2020 challenges.

Development

Start: Feb. 29, 2020, midnight

Description: Development phase - create models based on the training and validation set and submit results on the validation set.

Testing

Start: May 25, 2020, midnight

Description: Testing phase - submit results on the test set.

Competition Ends

Never

Leaderboard

  # Username      Score
  1 carlqwe       45.36
  2 wuxinfeiyang  40.75
  3 snisarg812    38.41