The WIDER Face Challenge aims to solicit new approaches that advance the state of the art in face detection. The challenge uses the WIDER Face dataset, a face detection benchmark presented at CVPR 2016. The WIDER Face dataset contains 32,203 images with 393,703 face bounding-box annotations. Faces in the WIDER Face dataset have a high degree of variability in scale, pose, and occlusion, as depicted in the sample images. The WIDER Face dataset is organized into 61 event classes. For each event class, we randomly select 40%/10%/50% of the data as the training, validation, and testing sets. Users are required to submit final prediction files, which we then evaluate.
We follow the WIDER Face convention to provide image-level annotations. Each image contains a set of face bounding boxes in the format "[left, top, width, height]". The annotations share the data structure below:
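A minimal sketch of that structure, assuming the block layout used in the public WIDER Face split files (one block per image: the relative image path, the number of faces, then one bounding-box row per face; the released annotation files may carry additional per-face attribute columns not shown here):

    0--Parade/0_Parade_marchingband_1_5.jpg
    n
    left_1 top_1 width_1 height_1
    ...
    left_n top_n width_n height_n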
Different from the WIDER Face convention, all detection results across test images should be written in a single text file. For example, if the path of a testing image is "./0--Parade/0_Parade_marchingband_1_5.jpg", the detection output is expected in the following format:
    ...
    < # 0--Parade/0_Parade_marchingband_1_5.jpg >
    < face i1 >
    < face i2 >
    ...
    < face im >
    ...
The text file should contain one row per detected bounding box, in the format "[left, top, width, height, score]". The text file should then be packed into a zip file. Each zip file should contain only one evaluation result; do not pack multiple submissions into a single zip file. The evaluation server accepts only a zip file as valid input. If the description above is unclear, please see the example submission file, which can be downloaded with the dataset.
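A minimal sketch of a script that writes and packs a submission in this format, assuming detections have already been collected in memory (the variable and file names here are illustrative, not part of the challenge specification):

    import zipfile

    # detections maps a relative image path to a list of
    # (left, top, width, height, score) tuples; names are illustrative.
    detections = {
        "0--Parade/0_Parade_marchingband_1_5.jpg": [
            (78.0, 221.0, 45.0, 57.0, 0.99),
        ],
    }

    with open("predictions.txt", "w") as f:
        for image_path, faces in detections.items():
            f.write("# " + image_path + "\n")  # image identifier row
            for left, top, width, height, score in faces:
                # one row per detected bounding box
                f.write("%f %f %f %f %f\n" % (left, top, width, height, score))

    # Pack exactly one result file into the zip, as the server requires.
    with zipfile.ZipFile("predictions.zip", "w") as zf:
        zf.write("predictions.txt")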
Please check the terms and conditions for further details.
This section describes the detection evaluation metrics used by the WIDER Face Challenge. The evaluation code provided can be used to obtain results on the publicly available WIDER Face validation set. It computes the average precision (AP) metric described below. To obtain results on the WIDER Face test set, for which ground-truth annotations are hidden, generated results must be uploaded to the evaluation server. The exact same evaluation code, described below, is used to evaluate results on the test set.
The average precision is used to characterize the performance of an object detector on WIDER Face:
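In its standard form, AP is the area under the precision-recall curve, with p(r) denoting precision as a function of recall and a detection counted as a true positive when its IoU with a ground-truth box exceeds a fixed threshold (conventionally 0.5):

    AP = \int_0^1 p(r) \, dr

A minimal sketch of this computation, assuming detections have already been matched against ground truth (tp_flags marks true positives; all names are illustrative and this is not the released evaluation code):

    import numpy as np

    def average_precision(scores, tp_flags, num_gt):
        # Rank detections by descending confidence.
        order = np.argsort(-np.asarray(scores, dtype=float))
        tp = np.asarray(tp_flags, dtype=float)[order]
        tp_cum = np.cumsum(tp)        # cumulative true positives
        fp_cum = np.cumsum(1.0 - tp)  # cumulative false positives
        recall = np.concatenate(([0.0], tp_cum / max(num_gt, 1)))
        precision = np.concatenate(([1.0], tp_cum / (tp_cum + fp_cum)))
        # Area under the precision-recall curve.
        return float(np.sum(np.diff(recall) * precision[1:]))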
Participants are recommended, but not restricted, to train their algorithms on the provided train and val sets. The CodaLab page of each track has links to the respective data. The test set is divided into two splits: test-dev and test-challenge. Test-dev is the default test set for testing under general circumstances and is used to maintain a public leaderboard. Test-challenge is used for the workshop competition; its results will be revealed at the workshop. When participating in the task, please be reminded that:
The datasets are released for academic research only and are free to researchers from educational or research institutions for non-commercial purposes. By downloading the dataset you agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for any commercial purpose any portion of the images or any portion of the derived data.
Copyright © 2018, WIDER Consortium. All rights reserved. Redistribution and use of this software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
For more information, please refer to the challenge webpage or contact us at wider-challenge@ie.cuhk.edu.hk.
Start: May 9, 2018, midnight
Description: In this phase, you can submit results on the validation set and see your rank on the leaderboard.
Start: June 18, 2018, midnight
Description: In this phase, we will release the testing set and the leaderboard will show results on the testing set.
Competition ends: July 19, 2018, 11:59 a.m.