UG2+ 2021 Track1.2 - Face Detection in the Low-Light Condition

Organized by flyywh

First phase: Dry-run (starts Feb. 1, 2021, midnight UTC)

End: Competition ends May 1, 2021, 3 a.m. UTC


Official Website: http://www.ug2challenge.org/

We provide 6,000 real-world low-light images captured at night around teaching buildings, streets, bridges, overpasses, parks, etc., all labeled with bounding boxes for human faces, as the main training and/or validation sets. There will be a hold-out testing set of 4,000 low-light images, also annotated with human face bounding boxes.
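The exact directory layout and annotation format of the released data are not described on this page; the sketch below only illustrates one plausible convention (one label file per image containing one "x1 y1 x2 y2" box per line), which is purely an assumption for illustration.

```python
# Minimal sketch of iterating over the training images and face boxes.
# NOTE: the "<stem>.txt" label files and the box format are assumed here,
# not taken from the official dataset documentation.
from pathlib import Path
from PIL import Image


def load_annotations(txt_path: Path) -> list[tuple[float, float, float, float]]:
    """Read one bounding box (x1, y1, x2, y2) per line from a label file."""
    boxes = []
    for line in txt_path.read_text().splitlines():
        if line.strip():
            x1, y1, x2, y2 = map(float, line.split()[:4])
            boxes.append((x1, y1, x2, y2))
    return boxes


def iter_dataset(image_dir: str, label_dir: str):
    """Yield (PIL image, list of face boxes) pairs for every image found."""
    image_paths = sorted(
        p for p in Path(image_dir).iterdir()
        if p.suffix.lower() in {".png", ".jpg", ".jpeg"}  # extensions assumed
    )
    for img_path in image_paths:
        label_path = Path(label_dir) / (img_path.stem + ".txt")
        boxes = load_annotations(label_path) if label_path.exists() else []
        yield Image.open(img_path).convert("RGB"), boxes
```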

Evaluation

There are 2 phases:

  • Phase 1: validation phase. We provide you with labeled training data and validation data. You can make predictions for both datasets. However, you will receive feedback on your performance on the validation set only. The performance of your LAST submission will be displayed on the leaderboard.
  • Phase 2: testing phase. You can make predictions for the test dataset. Your last submission of phase 1 will be automatically forwarded. Your performance on the test set will appear on the leaderboard when the organizers finish checking the submissions.

This competition only allows you to submit prediction results (no code is required at submission time).

However, the winner and the two runners-up are required to submit their model code. The challenge organizers will test reproducibility; failure to reproduce the reported performance will result in the corresponding leaderboard entry being marked as invalid.

The submissions are evaluated using the mAP (mean Average Precision) metric. The metric is re-scaled linearly between 0 and 1, with 0 corresponding to a random guess and 1 to perfect predictions.
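For orientation, the sketch below shows how average precision for a single class (faces) is commonly computed at one IoU threshold (0.5 is assumed here): predictions are sorted by confidence, greedily matched to unmatched ground-truth boxes, and precision is integrated over recall. The organizers' official mAP implementation may differ in its matching rules, IoU thresholds, and interpolation.

```python
# Hedged sketch of single-class AP at an assumed IoU threshold of 0.5.
import numpy as np


def iou(box, boxes):
    """IoU between one box and an (N, 4) array of boxes, all as x1, y1, x2, y2."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)


def average_precision(preds, gts, iou_thr=0.5):
    """
    preds: list of (image_id, score, x1, y1, x2, y2) detections.
    gts:   dict mapping image_id -> (N, 4) array of ground-truth boxes.
    """
    preds = sorted(preds, key=lambda p: -p[1])  # highest confidence first
    matched = {img: np.zeros(len(b), bool) for img, b in gts.items()}
    n_gt = sum(len(b) for b in gts.values())
    tp = np.zeros(len(preds))
    fp = np.zeros(len(preds))
    for i, (img, _, *box) in enumerate(preds):
        boxes = gts.get(img, np.zeros((0, 4)))
        if len(boxes) == 0:
            fp[i] = 1
            continue
        ious = iou(np.array(box), boxes)
        j = int(np.argmax(ious))
        if ious[j] >= iou_thr and not matched[img][j]:
            tp[i] = 1          # first sufficiently-overlapping match wins
            matched[img][j] = True
        else:
            fp[i] = 1          # duplicate or low-IoU detection
    recall = np.cumsum(tp) / max(n_gt, 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # Simple trapezoidal integration of the precision-recall curve;
    # official scorers often use interpolated precision (PASCAL VOC style).
    return float(np.trapz(precision, recall))
```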

UG2+ 2021: Rules

Registration must be completed before the end of Phase 1. You may make up to 5 submissions per day and 100 in total.

Read the following guidelines carefully before submitting. Methods that do not comply with the guidelines will be disqualified.

  • We encourage participants to use the provided training and validation data for each task; they may also use their own data or data from other sources for training. However, using any annotations of the provided benchmark test sets, or using the test sets themselves for either supervised or unsupervised training, is strictly forbidden.
  • The team name of submissions on Codalab must match the registration information. Any submission with an unregistered team name will not be qualified for prizes. Only a single submission per team can be the winner of a single sub-challenge. Changes in algorithm parameters do not constitute a different method; all parameter tuning must be conducted using the provided dataset and any additional data the participants consider appropriate.

Dry-run

Start: Feb. 1, 2021, midnight

Description: Dry-run phase: 100 dry-run images for debugging purposes only. This set is provided only to validate your prediction format (so that your submissions will work with the final testing phase). Scores on this set are not reliable due to the small amount of data. Feedback is provided on this dry-run set only.
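The required submission format is defined by the organizers and is not reproduced on this page. Purely as an illustration of preparing a dry-run submission, the sketch below writes per-image text files in a WIDER-FACE-style layout (image name, detection count, then "left top width height score" per line) and zips them; treat every file name and field here as an assumption, not the official format.

```python
# Hypothetical prediction writer for format-debugging on the dry-run set.
import zipfile
from pathlib import Path


def write_predictions(pred_dir: str, predictions: dict, zip_path: str = "submission.zip"):
    """
    predictions: dict mapping image name (without extension) to a list of
    detections, each given as (x1, y1, x2, y2, score).
    """
    out = Path(pred_dir)
    out.mkdir(parents=True, exist_ok=True)
    for name, dets in predictions.items():
        lines = [name, str(len(dets))]
        for x1, y1, x2, y2, score in dets:
            # WIDER-FACE convention stores left, top, width, height, confidence
            lines.append(f"{x1:.1f} {y1:.1f} {x2 - x1:.1f} {y2 - y1:.1f} {score:.4f}")
        (out / f"{name}.txt").write_text("\n".join(lines) + "\n")
    # Bundle the per-image files into a single archive for upload.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for txt in sorted(out.glob("*.txt")):
            zf.write(txt, arcname=txt.name)
```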

Testing

Start: April 30, 2021, midnight

Description: Final testing phase: The results on the testing set will be revealed when the organizers make them available.

Competition Ends

May 1, 2021, 3 a.m.
