The goal of this challenge is to advance research on learning knowledge and visual representations from web data. Web data contains not only a huge number of visual images but also rich meta information about them, which can be exploited to learn good representations and models. In 2018, we organize one track for this challenge: the WebVision Image Classification Task.
The WebVision dataset is composed of a training set, a validation set, and a test set. The training set is downloaded from the web without any human annotation. The validation and test sets are human annotated; the labels of the validation data are provided, while the labels of the test data are withheld. To imitate the setting of learning from web data, participants are required to learn their models solely on the training set and to submit classification results on the test set. The validation set may only be used to evaluate algorithms during development (see details in the Honor Code). Each submission produces a list of 5 labels, in descending order of confidence, for each image. Recognition accuracy is evaluated on the predicted label that best matches the ground-truth label of the image: a prediction is counted as correct if the ground-truth label appears among the 5 predicted labels, i.e., top-5 accuracy over the test images. Since different concepts have different numbers of test images in the WebVision 2.0 dataset, we first calculate the accuracy for each concept individually, and the final accuracy of an algorithm is the average across all classes. For this version of the challenge, there is only one ground-truth label per image.
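The metric can be stated precisely in code. Below is a minimal sketch of per-class top-5 accuracy as described above; the data structures and names are illustrative, not the official evaluation code.

```python
from collections import defaultdict

def per_class_top5_accuracy(ground_truth, predictions):
    """ground_truth: {image_id: true_label}
    predictions: {image_id: list of 5 labels, most confident first}
    Returns the mean over classes of each class's top-5 accuracy."""
    hits = defaultdict(int)    # per-class count of images with a top-5 hit
    totals = defaultdict(int)  # per-class count of test images
    for image_id, true_label in ground_truth.items():
        totals[true_label] += 1
        if true_label in predictions[image_id][:5]:
            hits[true_label] += 1
    # Average accuracy per concept first, then across all concepts,
    # so classes with few test images count as much as large ones.
    per_class = [hits[c] / totals[c] for c in totals]
    return sum(per_class) / len(per_class)
```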
To encourage more teams to participate in this challenge, we maintain a leaderboard showing the recognition results of all teams on a subset of the test data. The schedule has two phases: a development phase and a test phase. The development phase runs from March 28th to June 1st; during this phase, each team can submit results once per week. The test phase runs from June 2nd to June 8th; during this phase, each team can make a single submission containing 5 results. The final ranking is based on the best of the 5 results in each team's final submission.
This challenge aims at learning knowledge and visual representations from web data without human annotations. Therefore, we request that all participants:
March 18, 2018 | Training images and meta information are released
March 28, 2018 | Validation data and evaluation code are available; evaluation server opens for submissions
June 02, 2018 | Test phase starts
June 08, 2018 | Final submission deadline
June 10, 2018 | Challenge results are released
June 18, 2018 | Workshop date (co-located with CVPR 2018)
All deadlines are at 23:59 Pacific Standard Time.
Awards will be given to the top three performers of the track. In addition, all three top-ranked teams will be invited to give an oral presentation at the CVPR 2018 workshop. The award is conditioned on (i) attending the workshop and (ii) giving an oral presentation of the method used in the challenge.
By downloading the image data for this challenge, you agree to the following terms:
Start: March 28, 2018, midnight
Description: The Development Leaderboard is based on a fixed random subset of 50% of the test images. To submit, upload a .zip file containing a predictions.txt file with predictions in the format used in the dev kit. An example submission file can be found at: https://data.vision.ee.ethz.ch/aeirikur/webvision2018/example_submission.zip
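For reference, here is a minimal sketch of producing a development-phase submission. The exact line format is defined by the dev kit (see the example submission above); this sketch assumes one line per test image containing the five predicted label indices separated by spaces, in descending order of confidence, with images in the dev kit's test-list order.

```python
import zipfile

def write_submission(top5_per_image, zip_path="submission.zip"):
    """top5_per_image: list of 5-element label lists, one per test image,
    ordered as in the dev kit's test list (an assumption, not official)."""
    with open("predictions.txt", "w") as f:
        for top5 in top5_per_image:
            f.write(" ".join(str(label) for label in top5) + "\n")
    # Package the predictions file as the required .zip upload.
    with zipfile.ZipFile(zip_path, "w") as zf:
        zf.write("predictions.txt")
```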
Start: June 2, 2018, midnight
Description: To submit, upload a .zip file containing files predictions1.txt, ..., predictions5.txt with predictions in the format used in the dev kit. The file with the best top-5 accuracy will be used to determine the winner. Please also include a readme.txt file describing your entry. An example submission file can be found at: http://vision.ee.ethz.ch/~liwenw/webvision2018/example_submission_testphase.zip
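A test-phase entry bundles the five prediction files (same assumed line format as the development-phase sketch above) plus the required readme. A short sketch of the packaging step:

```python
import zipfile

# Each predictionsN.txt and readme.txt must already exist in the working dir.
files = [f"predictions{i}.txt" for i in range(1, 6)] + ["readme.txt"]
with zipfile.ZipFile("test_submission.zip", "w") as zf:
    for name in files:
        zf.write(name)
```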
End: June 10, 2018, 11:59 p.m.