WIDER Face & Pedestrian Challenge - Track 2: Pedestrian Detection

Organized by WIDER


Overview

The main goal of the WIDER Person Challenge is to address the problem of detecting pedestrians and cyclists in unconstrained environments. Two main applications of pedestrian detection are taken into consideration: surveillance and driving.

Data Description

          Train    Val      Test
Images    11500    5000     3500
Labels    46513    19696    #

(Test labels are not released.)

All images are named by a number and come from two sources. Images numbered 1 to 10000 were collected from surveillance cameras, while the rest (10001 to 20000) were captured by cameras mounted on vehicles driving through regular traffic in urban environments.
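As a minimal illustration of this rule (the helper name is ours, not part of the dataset toolkit):

	def image_source(image_number):
	    """Map an image number to its capture source under the naming rule above."""
	    if 1 <= image_number <= 10000:
	        return "surveillance"
	    if 10001 <= image_number <= 20000:
	        return "driving"
	    raise ValueError("image number outside the documented 1-20000 range")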


Description of Labels

We provide two categories for the training and validation data: walking pedestrian (label 1) and cyclist (label 2). Participants may use the two labels for reference during training, but in the test stage we make no distinction between the two categories. In other words, participants only need to submit, as their final results, the bounding boxes and detection scores of all the pedestrians and cyclists they have detected in the images; they do not need to distinguish between the categories.


Annotation File Format

The images in the training and validation sets are provided with annotations that indicate the bounding box and label for each object. The format of the annotation file is:
	[Image name] [label] [bounding box 1 (x y w h)] [label] [bounding box 2] ...
Note: We define w = xmax-xmin, h = ymax-ymin to avoid ambiguity.
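For illustration, here is a minimal Python sketch that parses one line of this format; the function name and the whitespace-splitting assumption are ours, so check the released sample annotations for the exact conventions:

	# Hypothetical parser for: [Image name] [label] [x y w h] [label] [x y w h] ...
	LABELS = {1: "pedestrian", 2: "cyclist"}

	def parse_annotation_line(line):
	    tokens = line.split()
	    image_name, rest = tokens[0], tokens[1:]
	    objects = []
	    # Each object occupies 5 tokens: label, x, y, w, h.
	    for i in range(0, len(rest), 5):
	        label = int(rest[i])
	        x, y, w, h = (float(v) for v in rest[i + 1:i + 5])
	        objects.append({"label": LABELS[label], "bbox": (x, y, w, h)})
	    return image_name, objects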


Ignore Parts Format

We provide the bounding boxes of the ignore parts for the images in the surveillance section of the training and validation sets. Not all images in this section have ignore parts, and the ignore parts carry no labels. The format of the ignore-part files is very similar to that of the annotation files, except for the missing labels:

	[Image name] [bounding box 1 (x y w h)] [bounding box 2] ...

Note: We define w = xmax-xmin, h = ymax-ymin to avoid ambiguity.
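The same approach applies to the ignore-part files; a hypothetical variant that simply omits the label token:

	def parse_ignore_line(line):
	    # Layout: [Image name] [x y w h] [x y w h] ... (no labels)
	    tokens = line.split()
	    image_name, rest = tokens[0], tokens[1:]
	    boxes = [tuple(float(v) for v in rest[i:i + 4])
	             for i in range(0, len(rest), 4)]
	    return image_name, boxes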


Submission Format

Given the test images, participants need to find all the pedestrians and cyclists and submit their bounding boxes and scores in the format specified below. The format of the submitted result file is:

	[Image name 1] [score(confidence)] [bounding box 1 (x y w h)]
	[Image name 1] [score(confidence)] [bounding box 2 (x y w h)]
	...
	[Image name 2] [score(confidence)] [bounding box 1 (x y w h)]

Note: The maximum size of a submission file accepted by the server is 60 MB; larger files will be rejected. In addition, the scores (confidence) in the submission file must be given to 3 decimal places and the bounding-box coordinates to 1 decimal place (same as the sample result file for the WIDER Pedestrian track).
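As a sketch of the required rounding, one detection per line can be written as below; the detections structure is our assumption, not part of any official toolkit:

	def write_submission(detections, path="submission.txt"):
	    """detections: iterable of (image_name, score, (x, y, w, h)) tuples."""
	    with open(path, "w") as f:
	        for image_name, score, (x, y, w, h) in detections:
	            # Scores keep 3 decimal places; box coordinates keep 1.
	            f.write(f"{image_name} {score:.3f} {x:.1f} {y:.1f} {w:.1f} {h:.1f}\n")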


General Rules

Please check the terms and conditions for further details.

Evaluation Criteria of Testing Data

We use the COCO detection evaluation metric: AP averaged over 10 Intersection over Union (IoU) thresholds (0.50:0.05:0.95) determines the challenge winner. During evaluation, we remove submitted objects whose overlap ratio with the ignore parts exceeds 50%; ground-truth objects meeting the same condition are removed as well. In other words, only objects outside the ignore parts count toward the final average AP. Please see the evaluation code for details.
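For intuition only, the sketch below filters detections against ignore regions, assuming "overlap ratio" means the fraction of a box's own area covered by an ignore region; the released evaluation code is authoritative:

	def intersection_area(a, b):
	    # Boxes are (x, y, w, h) with w = xmax - xmin, h = ymax - ymin.
	    ax, ay, aw, ah = a
	    bx, by, bw, bh = b
	    iw = max(0.0, min(ax + aw, bx + bw) - max(ax, bx))
	    ih = max(0.0, min(ay + ah, by + bh) - max(ay, by))
	    return iw * ih

	def keep_box(box, ignore_regions, thresh=0.5):
	    """Keep a box unless its overlap with some ignore region exceeds
	    thresh of the box's own area."""
	    area = box[2] * box[3]
	    return all(intersection_area(box, ig) / area <= thresh
	               for ig in ignore_regions)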

Terms and Conditions

General Rules

Participants are recommended, but not restricted, to train their algorithms on the provided train and val sets. The CodaLab page of each track has links to the respective data. The test set is divided into two splits: test-dev and test-challenge. Test-dev serves as the default test set for general testing and is used to maintain a public leaderboard. Test-challenge is used for the workshop competition; results will be revealed at the workshop. When participating in the task, please be reminded that:

  • Any and all external data used for training must be specified in the "method description" when uploading results to the evaluation server.
  • Results in the correct format must be uploaded to the evaluation server. The evaluation page on the individual site of each challenge track lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and provide its affiliation.
  • The results must be submitted through the CodaLab competition site of each challenge track. The participants can make up to 5 submissions per day in the development phases. A total of 5 submissions are allowed during the final test phase. Using multiple accounts to increase the number of submissions is strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late, or in violation of the rules.
  • The best entry of each team will be shown publicly on the leaderboard at all times.
  • To compete for awards, the participants must fill out a fact sheet briefly describing their methods. There is no other publication requirement.

Datasets and Annotations

The datasets are released for academic research only and are free to researchers from educational or research institutions for non-commercial purposes. By downloading the dataset, you agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for any commercial purpose any portion of the images or any portion of the derived data.

Software

Copyright © 2018, WIDER Consortium. All rights reserved. Redistribution and use of the software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the WIDER Consortium nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Contact Us

For more information, please refer to the challenge webpage or contact us at wider-challenge@ie.cuhk.edu.hk.

Development

Start: May 9, 2018, midnight

Description: In this phase, you can submit results on the validation set and see your rank on the leaderboard.

Final Test

Start: June 16, 2018, midnight

Description: In this phase, we will release the test set, and the leaderboard will show results on the test set.

Competition Ends

July 19, 2018, 11:59 a.m.

Leaderboard

  #  Username    Score
  1  BaiBing     0.7095
  2  zhuoranwu   0.5459
  3  phunghx     0.5448