WIDER Face & Pedestrian Challenge - Track 3: Person Search

Organized by wider

Schedule (all times UTC):

  • Development: May 9, 2018, midnight
  • Final Test: June 18, 2018, midnight
  • Competition Ends: July 18, 2018, midnight

Overview

Searching for a person in a large database with just a single image is a useful and challenging task. In the WIDER Person Search task, you are given an image of a target cast member and a set of candidates (movie frames with person bounding boxes), and you are asked to retrieve all instances belonging to that cast member.



Data Description

The data comes from 192 movies: 115 are used for training, 19 for validation, and 58 for testing. For each movie, the main cast members (top 10 in the movie's IMDb cast list) are collected as queries. The query profile image comes from the cast member's homepage on IMDb or TMDb. The candidates are extracted from key frames of the movie, in which the bounding boxes and identities of persons are manually annotated. A candidate is annotated either as one of the main cast members or as "others", where "others" means the candidate does not belong to any of the main cast members of that movie.

There are 1006 queries in training, 147 in validation, and 373 in test. The average numbers of candidates per movie are 690, 796, and 560 for the three splits, respectively.

We provide all the metadata for this task in a JSON file with the following structure:


{
    "tt0056923": {
        "cast": [
            {
                "id": "tt0056923_nm0094585",
                "img": "tt0056923/cast/nm0094585.jpg",
                "label": "nm0094585"
            }
        ],
        "candidates": [
            {
                "id": "tt0056923_0000",
                "img": "tt0056923/candidates/shot0368img0.jpg",
                "bbox": [
                    152,
                    126,
                    518,
                    603
                ],
                "label": "nm0000030"
            }
        ]
    }
}

Each movie entry contains two fields: "cast" and "candidates".

  • "id" is used to refer to each instance in the submission file.
  • "img" is the image path of the instance.
  • "label" is the identity.
  • "bbox" is the bounding box location (x, y, w, h) of the candidate in the image.


Submission Format

The submission file should be a zipped txt file. Please do not put the txt file in a folder; zip it directly.

For each query cast member in the test set, you must predict a comma-delimited list of candidates. The list should be sorted so that the first candidate is considered the most relevant and the last the least relevant. Each line of the file should contain the id of the query and the candidate list, separated by a space. An example is shown below:

    tt0056923_nm0094585 tt0056923_0002,tt0056923_0006,tt0056923_0029,tt0056923_0011,tt0056923_0001
    tt0209144_nm0001592 tt0209144_0001,tt0209144_0009,tt0209144_0231,tt0209144_0233,tt0209144_0022,tt0209144_0222,tt0209144_0007
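A minimal Python sketch of producing such a file, assuming `results` maps each query id to its ranked candidate ids (the sample contents and file names are placeholders, not official values):

    import zipfile

    # Hypothetical ranked results: query id -> candidate ids, most relevant first.
    results = {
        "tt0056923_nm0094585": ["tt0056923_0002", "tt0056923_0006",
                                "tt0056923_0029"],
    }

    # One line per query: "<query_id> <cand_1>,<cand_2>,..."
    with open("submission.txt", "w") as f:
        for query_id, ranked in results.items():
            f.write("{} {}\n".format(query_id, ",".join(ranked)))

    # Zip the txt file directly, without an enclosing folder.
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("submission.txt")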

General Rules

Please check the terms and conditions for further details.

Evaluation Criteria

Submissions are evaluated according to mean Average Precision (mAP):

    mAP = \frac{1}{Q} \sum_{q=1}^{Q} \frac{1}{m_q} \sum_{k=1}^{n_q} P_q(k) \, rel_q(k)

where:

  • Q is the number of query cast members;
  • m_q is the number of candidates with the same identity as the q-th query;
  • n_q is the number of all candidates in the movie of the q-th query;
  • P_q(k) is the precision at rank k for the q-th query;
  • rel_q(k) denotes the relevance of the prediction at rank k for the q-th query: it is 1 if the k-th prediction is correct and 0 otherwise.
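A small Python sketch of this metric under the definitions above (not the official evaluation code):

    def average_precision(ranked, relevant):
        # `ranked`: predicted candidate ids, most relevant first (length n_q).
        # `relevant`: set of candidate ids sharing the query identity (size m_q).
        hits, ap = 0, 0.0
        for k, cand in enumerate(ranked, start=1):
            if cand in relevant:        # rel_q(k) = 1
                hits += 1
                ap += hits / k          # accumulate P_q(k)
        return ap / len(relevant) if relevant else 0.0

    def mean_average_precision(queries):
        # `queries`: list of (ranked, relevant) pairs, one per query cast member.
        return sum(average_precision(r, s) for r, s in queries) / len(queries)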

Terms and Conditions

General Rules

Participants are encouraged, but not restricted, to train their algorithms on the provided train and val sets. The CodaLab page of each track has links to the respective data. The test set is divided into two splits: test-dev and test-challenge. Test-dev is the default test set for testing under general circumstances and is used to maintain a public leaderboard. Test-challenge is used for the workshop competition; results will be revealed at the workshop. When participating in the task, please be reminded that:

  • Any and all external data used for training must be specified in the "method description" when uploading results to the evaluation server.
  • Results in the correct format must be uploaded to the evaluation server. The evaluation page on the individual site of each challenge track lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and provide its affiliation.
  • The results must be submitted through the CodaLab competition site of each challenge track. Participants can make up to 5 submissions per day in the development phase. A total of 5 submissions are allowed during the final test phase. Using multiple accounts to increase the number of submissions is strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The best entry of each team will be public on the leaderboard at all times.
  • To compete for awards, the participants must fill out a fact sheet briefly describing their methods. There is no other publication requirement.

Datasets and Annotations

The datasets are released for academic research only and are free to researchers from educational or research institutions for non-commercial purposes. By downloading the dataset you agree not to reproduce, duplicate, copy, sell, trade, resell, or exploit for any commercial purpose any portion of the images or any portion of the derived data.

Software

Copyright © 2018, WIDER Consortium. All rights reserved. Redistribution and use of this software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the WIDER Consortium nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Contact Us

For more information, please refer to the challenge webpage or contact us at wider-challenge@ie.cuhk.edu.hk.

Development

Start: May 9, 2018, midnight

Description: In this phase, you can submit results on the validation set and see your rank on the leaderboard.

Final Test

Start: June 18, 2018, midnight

Description: In this phase, we will release the test set, and the leaderboard will show results on the test set.

Competition Ends

July 18, 2018, midnight

Leaderboard

  # | Username | Score
  1 | wider    | 0.5375