The WIDER Face Challenge aims at soliciting new approaches to advance the state of the art in face detection. The challenge uses the WIDER Face dataset, a face detection benchmark proposed at CVPR 2016. The dataset contains 32,203 images and 393,703 face bounding box annotations. Faces in the WIDER Face dataset have a high degree of variability in scale, pose, and occlusion, as depicted in the sample images. The dataset is organized into 61 event classes. For each event class, we randomly select 40%/10%/50% of the data as the training, validation, and testing sets. Participants are required to submit final prediction files, which we then evaluate.
We follow the WIDER Face convention to provide image-level annotations. Each image contains a set of face bounding boxes in the format "[left, top, width, height]". The annotations share the data structure below:
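As a minimal illustration of the "[left, top, width, height]" convention, the sketch below represents image-level annotations as a mapping from image path to box list and converts boxes to corner coordinates. The helper name and the sample box values are ours, not part of the official toolkit.

```python
# Hypothetical sketch of image-level annotations: each image path maps to a
# list of [left, top, width, height] face boxes (values are illustrative).
annotations = {
    "0--Parade/0_Parade_marchingband_1_5.jpg": [
        [449, 330, 122, 149],  # one face: left, top, width, height
        [361, 98, 263, 339],
    ],
}

def boxes_to_corners(boxes):
    """Convert [left, top, width, height] boxes to [x1, y1, x2, y2] corners."""
    return [[l, t, l + w, t + h] for (l, t, w, h) in boxes]

corners = boxes_to_corners(annotations["0--Parade/0_Parade_marchingband_1_5.jpg"])
```

The corner form is what an IoU computation typically consumes, while the dataset itself stays in the left/top/width/height form shown above.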
Different from the WIDER Face convention, all detection results across test images should be written to a single text file. For example, if the path of a test image is "./0--Parade/0_Parade_marchingband_1_5.jpg", the detection output is expected in the following format:
...
< # 0--Parade/0_Parade_marchingband_1_5.jpg >
< face i1 >
< face i2 >
...
< face im >
...
The text file should contain one row per detected bounding box, in the format "[left, top, width, height, score]". The text file should then be packed into a zip file. Each zip file should contain only one evaluation result; do not pack multiple submissions into a single zip file. The evaluation server accepts only a zip file as valid input. If the above description is unclear, please see the example submission file, which can be downloaded with the dataset.
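The packing steps above can be sketched as follows. This is an illustrative helper, assuming the per-image header format shown above ("# " followed by the image path) and one whitespace-separated box row per line; the function and file names are ours.

```python
import zipfile

def write_submission(detections, txt_path="predictions.txt", zip_path="predictions.zip"):
    """Write all detections to one text file and pack it into a zip.

    `detections` maps an image path (relative to the test root) to a list of
    [left, top, width, height, score] rows. Illustrative sketch only, not the
    official toolkit.
    """
    with open(txt_path, "w") as f:
        for image_path, boxes in detections.items():
            f.write(f"# {image_path}\n")  # per-image header line
            for left, top, width, height, score in boxes:
                f.write(f"{left} {top} {width} {height} {score:.3f}\n")
    # One evaluation result per zip: the server accepts only a zip file.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write(txt_path)

write_submission({
    "0--Parade/0_Parade_marchingband_1_5.jpg": [[449, 330, 122, 149, 0.98]],
})
```

Keeping the writer in one place makes it easy to guarantee the one-result-per-zip rule before uploading.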
To encourage the practical use of current state-of-the-art algorithms, we add an experimental track that also considers the runtime of the face detection algorithm. We will provide a basic Docker container and a guideline for its usage, to be announced later.
Participants need to wrap their algorithm in the Docker container and implement the predefined interface to generate valid detection output. If the above description is unclear, please see the example of the basic Docker container and its guidelines.
Please check the terms and conditions for further details.
This section describes the detection evaluation metrics used by the WIDER Face Challenge. The evaluation code provided can be used to obtain results on the publicly available WIDER Face validation set. It computes the AP metrics described below. To obtain results on the WIDER Face test set, for which ground-truth annotations are hidden, generated results must be uploaded to the evaluation server. The exact same evaluation code is used to evaluate results on the test set.
Average precision (AP), the area under the precision-recall curve, is used to characterize the performance of an object detector on WIDER Face.
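A minimal sketch of AP as the area under the precision-recall curve is shown below. It assumes detections have already been matched to ground truth (a true-positive flag per detection); the official evaluation code additionally performs the per-image IoU matching, and may use an interpolated variant of the area computation, so this is an approximation for intuition only.

```python
def average_precision(scores, tp_flags, num_gt):
    """Compute AP from per-detection confidence scores and true-positive flags.

    Detections are ranked by score; at each rank we record recall and
    precision, then integrate precision over recall (rectangular rule).
    """
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    ap, prev_recall = 0.0, 0.0
    for i in order:
        if tp_flags[i]:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += (recall - prev_recall) * precision  # area of one recall step
        prev_recall = recall
    return ap

# Toy example: 3 detections, 2 ground-truth faces.
ap = average_precision([0.9, 0.8, 0.7], [True, False, True], num_gt=2)
```

In the toy example the first and third detections are true positives, so the PR curve steps through (0.5, 1.0) and (1.0, 2/3), giving AP = 0.5 + 1/3.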
Both the averaged AP and the runtime are taken into consideration. The details will be released later.
Participants are recommended, but not restricted, to train their algorithms on the provided train and val sets. The CodaLab page of each track has links to the respective data. The test set is divided into two splits: test-dev and test-challenge. Test-dev is the default test set for testing under general circumstances and is used to maintain a public leaderboard. Test-challenge is used for the workshop competition; results will be revealed at the workshop. When participating in the task, please be reminded that:
The datasets are released for academic research only and are free to researchers from educational or research institutions for non-commercial purposes. By downloading the dataset, you agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for any commercial purpose any portion of the images or any portion of the derived data.
Copyright © 2019, WIDER Consortium. All rights reserved. Redistribution and use of the software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Start: May 10, 2019, 6:59 a.m.
Description: In this phase, you can submit results on the validation set and see your rank on the leaderboard.
Start: June 26, 2019, 6:59 a.m.
Description: In this phase, the test set will be released and the leaderboard will show results on the test set.
Start: June 26, 2019, 6:59 a.m.
Description: In this phase, the runtime will be shown on the detailed results page.
July 26, 2019, 6:59 a.m.