The training set of Dark Zurich is only available on our server via the link https://data.vision.ee.ethz.ch/csakarid/shared/GCMA_UIoU/Dark_Zurich_train_anon.zip, which is also included on our project website. This is due to size limitations of the CodaLab server. The remaining parts of Dark Zurich that are required for the challenge are available on CodaLab as usual.
The Vision for All Seasons: Adverse Weather and Lighting Conditions workshop will be held on June 14, 2020 in conjunction with CVPR 2020 in Seattle, USA.
Adverse weather and illumination conditions (e.g. fog, rain, snow, ice, low light, nighttime, glare and shadows) create visibility problems for the sensors that power automated systems. Many outdoor applications, such as autonomous cars and surveillance systems, are required to operate reliably in the frequently occurring scenarios of bad weather. While rapid progress is being made in this direction, the performance of current vision algorithms is still mainly benchmarked under clear conditions (good weather, favorable lighting), and even top-performing state-of-the-art algorithms undergo severe performance degradation under adverse conditions. The aim of the "Vision for All Seasons" workshop is to promote research into the design of robust vision algorithms for adverse weather and illumination conditions.
Jointly with the "Vision for All Seasons" workshop, we organize the "UIoU Dark Zurich" challenge on uncertainty-aware semantic nighttime image segmentation. The challenge uses the Dark Zurich dataset, presented in the ICCV 2019 paper "Guided Curriculum Model Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation", which contains a total of 8779 images captured at nighttime, twilight, and daytime, along with the GPS coordinates of the camera for each image. Evaluation of semantic segmentation models on the labeled nighttime part of Dark Zurich is based on a novel, uncertainty-aware framework: corresponding daytime images are leveraged at annotation time to assign reliable semantic labels to image regions that are originally indiscernible, i.e. beyond human recognition capability, so that such invalid regions can be included in the evaluation jointly with valid regions. This framework is centered on UIoU (uncertainty-aware IoU), a new performance metric that generalizes standard IoU and allows the selective invalidation of predictions, which is crucial for safety-oriented systems handling inputs with potentially ambiguous content, as in the adverse-conditions scenario. UIoU rewards models that place higher confidence on valid regions than on invalid ones, i.e. models that behave consistently with human annotators.
For training their models, participants are not given a labeled training set but rather an unlabeled one. They are encouraged to additionally leverage external sources of strong supervision (e.g. models pretrained on daytime datasets), the weak supervision provided by the cross-time-of-day correspondences in Dark Zurich, and domain adaptation techniques.
The UIoU Dark Zurich challenge aims to establish our novel evaluation framework for semantic segmentation, based on the uncertainty-aware IoU (UIoU) metric, for usage on nighttime or other adverse-condition datasets with potentially ambiguous image content. In general terms, the task is to parse nighttime images from Dark Zurich into the standard set of 19 Cityscapes classes, marking as invalid those pixels whose content is deemed uncertain.
UIoU Dark Zurich runs in two phases: the development phase and the testing phase. Performance associated with submissions is reported in a leaderboard in each phase.
The existing 151 nighttime test ground-truth annotations are withheld, so that they serve permanently as an objective benchmark for the task of semantic nighttime image segmentation. The 50 nighttime validation annotations are likewise withheld for the duration of the challenge and will be made publicly available after its completion.
In each phase, participants need to submit a .zip file containing three (3) subdirectories, corresponding to three different result modalities, with the following names:

- labelTrainIds: predictions of semantic labels, encoded as png images whose pixel values encode labels in Cityscapes trainIDs format, according to the Cityscapes documentation script helpers/labels.py.
- confidence: confidence maps corresponding to the predicted labels, encoded as uint16 png images whose pixel values range from 0 to 65535. A value of 0 corresponds to a confidence of 0.0, a value of 65535 corresponds to a confidence of 1.0, and all in-between pixel values are mapped to confidence values with linear interpolation.
- labelTrainIds_invalid: predictions of semantic labels including the special label invalid, encoded as png images. Pixel values encode labels in Cityscapes trainIDs format, with the value 255 reserved for the invalid label. An invalid prediction for a pixel indicates that the model has not made a prediction for that pixel, typically due to low associated confidence.

Each directory needs to contain exactly one result file for every image in the evaluation set. In particular, for each image with a file name in the format {sequence}_frame_{frame:0>6}_{type}{ext}, the evaluation script searches the directory for a matching file with a name following the pattern {sequence}_frame_{frame:0>6}*.png. If zero matches are found, or if two or more matching files are detected, the evaluation fails. A sketch of saving the three result files in the required formats is given below.
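For concreteness, here is a minimal Python sketch of writing the three result files for a single image. It assumes numpy and Pillow are installed; the save_results helper and its argument names are illustrative, not part of any official toolkit.

```python
import os

import numpy as np
from PIL import Image


def save_results(out_root, image_name, label_train_ids, confidence, invalid_mask):
    """Save the three result modalities for one image.

    image_name      -- e.g. "GOPR0356_frame_000321", matching the
                       {sequence}_frame_{frame:0>6} part of the input file name
    label_train_ids -- HxW uint8 array of Cityscapes trainIDs (0-18)
    confidence      -- HxW float array with values in [0.0, 1.0]
    invalid_mask    -- HxW boolean array, True where the model abstains
    """
    # labelTrainIds: plain trainID predictions as an 8-bit png.
    labels_dir = os.path.join(out_root, 'labelTrainIds')
    os.makedirs(labels_dir, exist_ok=True)
    Image.fromarray(label_train_ids).save(os.path.join(labels_dir, image_name + '.png'))

    # confidence: map [0.0, 1.0] linearly onto the uint16 range [0, 65535].
    conf_dir = os.path.join(out_root, 'confidence')
    os.makedirs(conf_dir, exist_ok=True)
    conf_uint16 = np.round(confidence * 65535.0).astype(np.uint16)
    Image.fromarray(conf_uint16).save(os.path.join(conf_dir, image_name + '.png'))

    # labelTrainIds_invalid: same trainIDs, with abstained pixels set to 255.
    invalid_dir = os.path.join(out_root, 'labelTrainIds_invalid')
    os.makedirs(invalid_dir, exist_ok=True)
    labels_invalid = label_train_ids.copy()
    labels_invalid[invalid_mask] = 255
    Image.fromarray(labels_invalid).save(os.path.join(invalid_dir, image_name + '.png'))
```

The three directories can then be zipped together to form the submission archive.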
The three result modalities are used to compute three performance metrics:

- UIoU: computed on the labelTrainIds_invalid predictions. If no pixel is predicted as invalid, it is by definition equal to IoU.
- Average UIoU: computed on the labelTrainIds and the confidence predictions. A total of 101 confidence thresholds, uniformly distributed between 1/19 and 1, are applied to the confidence predictions in order to selectively invalidate the labelTrainIds predictions. Average UIoU is calculated by averaging the UIoU results over all thresholds, as sketched after this list.
- IoU: computed on the labelTrainIds predictions.
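For intuition, the threshold sweep behind Average UIoU can be sketched in Python as follows. The compute_uiou argument is a hypothetical stand-in for the official UIoU evaluation routine; only the sweep logic itself is taken from the description above.

```python
import numpy as np


def average_uiou(label_train_ids, confidence, compute_uiou):
    """Sketch of the Average UIoU protocol described above.

    label_train_ids -- HxW uint8 array of predicted trainIDs (0-18)
    confidence      -- HxW float array with values in [0.0, 1.0]
    compute_uiou    -- hypothetical stand-in for the official UIoU routine;
                       expects predictions in which 255 marks invalid pixels
    """
    # 101 thresholds uniformly distributed between 1/19 and 1.
    thresholds = np.linspace(1.0 / 19.0, 1.0, num=101)
    scores = []
    for t in thresholds:
        # Invalidate every prediction whose confidence falls below the threshold.
        invalidated = label_train_ids.copy()
        invalidated[confidence < t] = 255
        scores.append(compute_uiou(invalidated))
    # Average UIoU is the mean of UIoU over all thresholds.
    return float(np.mean(scores))
```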
These are the official rules (terms and conditions) that govern how the UIoU Dark Zurich challenge on uncertainty-aware semantic nighttime image segmentation will operate. This challenge will be referred to simply as the "challenge", the "competition" or the "contest" throughout the rest of these rules, and may be named the "UIoU Dark Zurich" or "Vision for All Seasons" benchmark, challenge, competition or contest elsewhere, including but not limited to our webpage, our documentation, and other publications.
In these rules, "we", "our", and "us" refer to the organizers (csakarid [at] vision.ee.ethz.ch and dai [at] vision.ee.ethz.ch) of the challenge and "you" and "yourself" refer to an eligible contest participant.
This is a skill-based contest and chance plays no part in the determination of the winner(s).
The goal of the contest is to correctly parse the semantic content of nighttime images.
Focus of the contest: the Dark Zurich dataset will be made available for the challenge. The dataset is divided into three subsets: training, validation, and test. Participants will not have access to the ground-truth semantic labels of the test data. Participants are ranked according to the performance of their methods on the test data, with UIoU (uncertainty-aware IoU) as the primary performance metric for determining the ranking. The winners will be determined according to the ranking of their entries and additional criteria (including but not limited to the novelty of the developed methods) as judged by the organizers.
The Dark Zurich dataset is made freely available, either in the context of the challenge or outside it, to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree:
The registered participants will be notified by e-mail if any changes are made to the schedule. The schedule is available on the Vision for All Seasons workshop website and on the Overview page of the present CodaLab competition website.
You are eligible to compete in this contest only if you meet all the following requirements:
This contest is void wherever it is prohibited by law.
NOTE: industry and research labs are allowed to submit entries and to compete both in the development phase and the testing phase.
In order to be eligible for judging, an entry must meet all the following requirements:
Entry contents: participants are required to submit result files in zipped archives. When deeming participants eligible to win the competition, we reserve the right to apply additional criteria beyond the test-set leaderboard ranking of the entry. Such criteria include but are not limited to the reproducibility of the results and the novelty of the method used for the relevant entry.
Other than what is set forth below, we are not claiming any ownership rights to your entry. However, by submitting your entry, you:
If you do not want to grant us these rights to your entry, please do not enter this contest.
The participants will follow the instructions on the CodaLab website to submit entries.
Each participant is allowed to submit only one final entry. We are not responsible for entries that we do not receive for any reason, or for entries that we receive but that do not work properly.
The participants must follow the official rules. We will immediately disqualify invalid entries.
The board of Vision for All Seasons will judge the entries. The judges will review all eligible entries received and determine a list of winners of the competition based upon the performance on the test set and the additional criteria mentioned in paragraph 5. The judges will verify that the winners complied with the rules.
If we do not receive a sufficient number of entries meeting the requirements, we may, at our discretion based on the above criteria, not declare any winner for the contest. In the event of a tie between any eligible entries, the tie will be broken by giving preference to the earliest submission, using the time stamp of the CodaLab submission platform.
We will send a notification to the potential winners. If the notification that we send is returned as undeliverable, or you are otherwise unreachable for any reason, we may disqualify you from the list of winners and select another eligible participant in your place, unless forbidden by applicable law.
If you are a potential winner, we may require you to sign a declaration of eligibility, use, indemnity and liability/publicity release and applicable tax forms. If you (or your parent/legal guardian if applicable) do not sign and return these required forms within the time period listed on the winner notification message, we may disqualify you from the list of winners and select another eligible participant in your place.
The terms and conditions are inspired by and use verbatim text from the "Terms and conditions" of ChaLearn Looking at People Challenges and of the NTIRE 2017, 2018, 2019 and 2020 challenges.
Start: Feb. 29, 2020, midnight
Description: Development phase - create models based on the training and validation set and submit results on the validation set.
Start: May 25, 2020, midnight
Description: Testing phase - submit results on the test set.
End: Never
# | Username | Score
---|---|---
1 | carlqwe | 45.36 |
2 | wuxinfeiyang | 45.15 |
3 | hzcxq | 43.98 |