WebVision Challenge 2019

Organized by 07wanglimin


Challenge

The goal of this challenge is to advance research on learning knowledge and representations from web data. Web data contains not only huge numbers of visual images, but also rich meta information about them, which can be exploited to learn good representations and models. In 2019, we organize a single track for this challenge: the WebVision Image Classification Task.

WebVision Image Classification Task

The WebVision dataset is composed of training, validation, and test sets. The training set is downloaded from the Web without any human annotation. The validation and test sets are human-annotated; the labels of the validation data are provided, while the labels of the test data are withheld. To imitate the setting of learning from web data, participants are required to train their models solely on the training set and submit classification results on the test set. The validation set may only be used to evaluate algorithms during development (see details in the Honor Code).

Each submission produces a list of 5 labels, in descending order of confidence, for each image. Recognition accuracy is evaluated based on the label that best matches the ground-truth label for the image; that is, an algorithm produces a list of 5 labels per image, and the accuracy of these predictions is the top-5 accuracy over the test images. Since different concepts have different numbers of test images in the WebVision 2.0 dataset, we calculate the accuracy for each concept individually, and the final accuracy of an algorithm is the average accuracy across all classes. For this version of the challenge, there is only one ground-truth label per image.
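The class-averaged top-5 metric described above can be sketched as follows. This is an illustrative reimplementation, not the official evaluation code; all names here are hypothetical.

```python
# Sketch of the evaluation metric: top-5 accuracy computed per class
# (concept), then averaged uniformly across classes. Illustrative only;
# the official evaluation code ships with the dev kit.

from collections import defaultdict

def per_class_top5_accuracy(predictions, ground_truth):
    """predictions: {image_id: 5 labels in descending confidence};
    ground_truth: {image_id: single true label}."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for image_id, true_label in ground_truth.items():
        counts[true_label] += 1
        # A prediction counts as correct if any of the 5 labels matches.
        if true_label in predictions[image_id][:5]:
            hits[true_label] += 1
    # Accuracy per concept, then the unweighted mean over all classes.
    per_class = {c: hits[c] / counts[c] for c in counts}
    return sum(per_class.values()) / len(per_class)

preds = {"img0": [3, 1, 4, 1, 5], "img1": [2, 7, 1, 8, 2], "img2": [9, 9, 9, 9, 9]}
gt = {"img0": 4, "img1": 5, "img2": 9}
print(per_class_top5_accuracy(preds, gt))  # class 4: hit, class 5: miss, class 9: hit
```

Because the mean is taken over classes rather than images, concepts with few test images carry the same weight as concepts with many.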

Submission Policy

To encourage more teams to participate in this challenge, we maintain a leaderboard showing the recognition results of all teams on a portion of the test data. The schedule has two phases: a development phase and a test phase. The development phase runs from March 1st to May 31st; during this phase, each team can submit results once per week. The test phase runs from June 1st to June 7th; during this phase, each team can make a single submission containing 5 results. The final rank is based on the best of the 5 results in each team's final submission.

Honor Code

This challenge aims to learn knowledge and visual representations from web data without human annotations. Therefore, we request that all participants adhere to the following:

  1. During the training phase, algorithms may learn from the web images and meta information in the training set. Only the provided training data can be used; external data is strictly prohibited in this challenge.
  2. The validation set is human-annotated and may therefore only be used to evaluate algorithms during development. Using the validation set in any other way during the training phase is prohibited, for example, merging it into the training set or using it to filter noisy images from the training set.


Important Dates

February 18, 2019: Training images and meta information are released
March 1, 2019: Validation data and evaluation code are available; the evaluation server opens for submissions
June 1, 2019: Test phase starts
June 7, 2019: Final submission deadline
June 10, 2019: Challenge results are released
June 16, 2019: Workshop date (co-located with CVPR 2019)

All deadlines are at 23:59 Pacific Standard Time.

Awards

An award will be given to the top three performers of each track. In addition, all three top-ranked participants will be invited to give an oral presentation at the CVPR 2019 workshop. The award is conditioned on (i) attending the workshop and (ii) giving an oral presentation of the methods used in the challenge.

Terms of Use

By downloading the image data for this challenge you agree to the following terms:

  1. You will not distribute the images.
  2. ETH Zurich makes no representations or warranties regarding the data, including but not limited to warranties of non-infringement or fitness for a particular purpose.
  3. You accept full responsibility for your use of the data and shall defend and indemnify ETH Zurich, including its employees, officers and agents, against any and all claims arising from your use of the data, including but not limited to your use of any copies of copyrighted images that you may create from the data.

Development

Start: March 1, 2019, midnight

Description: The Development Leaderboard is based on a fixed random subset of 50% of the test images. To submit, upload a .zip file containing a predictions.txt file with predictions in the format used in the dev kit. An example submission file can be found at: http://www.vision.ee.ethz.ch/webvision/files/example_submission.zip
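Packaging a development-phase submission can be sketched as below. The per-line format is an assumption (five predicted label indices per test image, space-separated, in descending confidence); check the dev kit's example file for the authoritative format.

```python
# Sketch of writing predictions.txt and zipping it for upload.
# ASSUMPTION: one line per test image, five space-separated label
# indices in descending confidence; verify against the dev kit.

import zipfile

predictions = [
    [12, 7, 301, 5, 88],  # top-5 labels for test image 1 (illustrative)
    [4, 40, 400, 19, 2],  # top-5 labels for test image 2 (illustrative)
]

with open("predictions.txt", "w") as f:
    for top5 in predictions:
        f.write(" ".join(str(label) for label in top5) + "\n")

with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("predictions.txt")
```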

Testing

Start: June 1, 2019, midnight

Description: To submit, upload a .zip file containing files predictions1.txt through predictions5.txt, each with predictions in the format used in the dev kit. The file with the best top-5 accuracy will be used to determine the winner. Please also include a readme.txt file describing your entry. An example submission file can be found at: http://www.vision.ee.ethz.ch/webvision/files/example_submission_testphase.zip
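A test-phase archive bundles the five prediction files plus the readme. The sketch below assumes the same per-line prediction format as the development phase; the placeholder contents are illustrative only.

```python
# Sketch of packaging a test-phase submission: predictions1.txt ...
# predictions5.txt plus readme.txt, zipped together. Per-line format
# is ASSUMED to match the dev kit (five labels per test image).

import zipfile

with zipfile.ZipFile("submission_testphase.zip", "w") as zf:
    for i in range(1, 6):
        name = f"predictions{i}.txt"
        with open(name, "w") as f:
            f.write("12 7 301 5 88\n")  # placeholder: one test image
        zf.write(name)
    with open("readme.txt", "w") as f:
        f.write("Entry description: model, training setup, method summary.\n")
    zf.write("readme.txt")
```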

Competition Ends

June 7, 2019, 11:59 p.m.

Leaderboard

#  Username  Score
1  pci_wzw   75.05