The goal of this challenge is to advance the area of learning knowledge and representations from web data. Web data contains not only huge numbers of visual images but also rich meta information about them, which can be exploited to learn good representations and models. We organize two tasks to evaluate the learned knowledge and representations: (1) the WebVision Image Classification Task, and (2) the Pascal VOC Transfer Learning Task. The second task is built upon the first. Researchers can participate in only the first task, or in both tasks.
The WebVision dataset is composed of training, validation, and test sets. The training set is downloaded from the Web without any human annotation. The validation and test sets are human annotated; the labels of the validation data are provided, while the labels of the test data are withheld. To imitate the setting of learning from web data, participants are required to train their models solely on the training set and submit classification results on the test set. The validation set may only be used to evaluate algorithms during development (see details in the Honor Code). Each submission must produce a list of 5 labels, in descending order of confidence, for each image. Recognition accuracy is evaluated based on the label that best matches the ground truth label for the image: a prediction counts as correct if the ground truth label appears among the 5 predicted labels, and the fraction of correct predictions over the test images is the top-5 accuracy. For this version of the challenge, there is only one ground truth label per image.
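The top-5 evaluation described above can be sketched as follows. This is an illustrative implementation, not the official dev-kit evaluation code; the function name and data layout are assumptions.

```python
def top5_accuracy(predictions, ground_truth):
    """Fraction of images whose ground truth label appears in the
    image's 5 predicted labels (descending confidence).

    predictions:  list of 5-label lists, one per image
    ground_truth: list of single labels, one per image
    """
    correct = sum(gt in preds[:5]
                  for preds, gt in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Toy example: the true label is among the top 5 for 2 of 3 images.
preds = [[3, 7, 1, 9, 2], [4, 4, 4, 4, 4], [8, 0, 5, 6, 1]]
truth = [7, 2, 5]
print(top5_accuracy(preds, truth))  # 0.666...
```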
To encourage more teams to participate in this challenge, we will maintain a leaderboard showing the recognition results of all teams on a portion of the test data. The schedule has two phases: a development phase and a test phase. The development phase runs from March 27th to June 8th; during this phase, each team can submit one result per week. The test phase runs from June 9th to June 15th; during this phase, each team can submit once, with up to 5 results. The final ranking is based on the best of the 5 results in each team's final submission.
This challenge aims at learning knowledge and visual representations from web data without human annotation. Therefore, we ask that all participants:
| March 15, 2017 | Development kit, data, and evaluation code are released |
| June 15, 2017 | Final submission deadline |
| July 15, 2017 | Challenge results are released |
| July 26, 2017 | Workshop date (co-located with CVPR'17) |
All deadlines are at 23:59 Pacific Standard Time.
Awards will be given to the top 3 performers of each track.
By downloading the image data for this challenge you agree to the following terms:
Start: March 15, 2017, midnight
Description: The Development Leaderboard is based on a fixed random subset of 50% of the test images. To submit, upload a .zip file containing a predictions.txt file with the predictions in the format used in the dev kit.
Start: June 23, 2017, midnight
Description: To submit, upload a .zip file containing files predictions1.txt, ..., predictions5.txt with the predictions in the format used in the dev kit. The file with the best top-5 accuracy will be used to determine the winner. Please also include a readme.txt file with a description of your entry. An example submission file can be found at: https://data.vision.ee.ethz.ch/cvl/webvision/example_submission_classification.zip