VIPriors Object Detection Challenge

Organized by rjbruin

First phase: Development (validation set), starting March 2, 2020, midnight UTC

Competition ends: July 10, 2020, 10:59 p.m. UTC

VIPriors Object Detection Challenge

We present the "Visual Inductive Priors for Data-Efficient Computer Vision" challenges. We offer four challenges in which models must be trained from scratch, with the number of training samples reduced to a fraction of the full set. The winners of each challenge are invited to present their winning method at the VIPriors workshop at ECCV 2020.

This page hosts the object detection challenge. The task is to maximize Average Precision on a subset of the COCO 2017 object detection dataset. We provide a subset of roughly 6,000 images to train on.

Please note that this challenge does not allow the use of any pre-trained checkpoints, including pre-trained backbones! To safeguard the competitive integrity of the competition, participants may be asked to share their code with the organizers for a reproducibility study.

The winners of this challenge will get the opportunity to present their method at the VIPriors workshop at ECCV 2020. The organizers will contact eligible contenders after the challenges close.

Report

For a submission on CodaLab to qualify for the competition, we require the authors to submit a technical report of at least three pages describing the submission. Please refer to the workshop website for details.

Data

As training data for these challenges we use subsets of publicly available datasets. We do not provide the data directly; instead, our toolkit exposes tooling to generate the subsets from the canonical versions of the publicly available full datasets. Please refer to "Resources" below for details.

Resources

To accommodate submissions to the challenges we provide a toolkit that contains:

  • Python tools for generating the appropriate training and validation data;
  • documentation of the required submission format for the challenges;
  • implementations of the baseline models for each challenge.

See the toolkit's GitHub repository for these resources.

Questions

If you have any questions, please first refer to the FAQ in the toolkit repository. If your question is not answered there, you can ask it on the challenge forums.

Evaluation Criteria

Submissions will be evaluated using the COCO API. Upload your submission as a ZIP file containing only the file submission.json. The format of the file must match the COCO object detection results format.

Please refer to the challenge toolkit for more details and for tools to generate valid submissions. Don't forget to zip your submission file, as CodaLab only accepts ZIP archives as submissions.
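As an illustration, the sketch below builds a minimal submission.json in the COCO detection results format (a list of records, each with an image_id, a category_id, a bbox given as [x, y, width, height] in pixels, and a confidence score) and packs it into the required ZIP archive. The detection values and file names here are placeholders, not real challenge data; the toolkit remains the authoritative reference for valid submissions.

```python
import json
import zipfile

# COCO detection results format: one record per detected box.
# bbox is [x, y, width, height] in absolute pixel coordinates.
# These image/category IDs and boxes are made-up placeholders.
detections = [
    {"image_id": 42, "category_id": 18, "bbox": [258.2, 41.3, 348.3, 243.6], "score": 0.94},
    {"image_id": 42, "category_id": 1, "bbox": [61.9, 276.1, 22.5, 36.7], "score": 0.55},
]

# Write the results to submission.json ...
with open("submission.json", "w") as f:
    json.dump(detections, f)

# ... and zip it, since CodaLab only accepts ZIP archives.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.json")
```

Note that the archive must contain submission.json at the top level, with no enclosing directory.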

Terms and Conditions

  • We prohibit the use of other data than the provided training data, i.e., no pre-training, no transfer learning. This includes pre-trained backbones.
  • Top contenders in the challenge may be required to share their submissions with the organizers for review, to ensure reproducibility and competitive integrity. The organizers will contact contenders when necessary after the challenges close.
  • Organizers retain the right to disqualify any submissions that violate these rules.

Development (validation set)

Start: March 2, 2020, midnight UTC

Description: Use this phase to debug your submission. Submissions in this phase are evaluated against the validation set. Don't forget to zip your submission file, as CodaLab only accepts ZIP archives as submissions.

Competition (test set)

Start: March 2, 2020, midnight UTC

Description: Submissions in this phase are evaluated against the test set. Don't forget to zip your submission file, as CodaLab only accepts ZIP archives as submissions.

Competition Ends

July 10, 2020, 10:59 p.m. UTC
