VIPriors Action Recognition Challenge

Organized by rjbruin


VIPriors Action Recognition Challenge

We present the "Visual Inductive Priors for Data-Efficient Computer Vision" challenges. We offer four challenges in which models must be trained from scratch, with the number of training samples reduced to a fraction of the full set. The winners of each challenge are invited to present their winning method at the VIPriors workshop at ECCV 2020.

This challenge is the VIPriors Action Recognition Challenge. In this particular challenge, the task is Action Recognition and the original dataset is UCF101. The training set consists of ~4.8K clips.

Please note that this challenge does not allow using any pre-trained checkpoint, including any pre-trained backbone! To warrant the competitive integrity of the competition, participants may expect a request to share their code with the organizers for a reproducibility study.

The winners of this challenge will get an opportunity to present their method at the VIPriors workshop at ECCV 2020. The organizers will contact contenders that are eligible for this opportunity after the challenges close.

Data

As training data for these challenges we use subsets of publicly available datasets. We do not directly provide the data; instead, our toolkit provides tooling to generate the subsets from the canonical versions of the publicly available full datasets. Please refer to "Resources" below for details.

Resources

To accommodate submissions to the challenges we provide a toolkit that contains:

  • Python tools for generating the appropriate training and validation data;
  • documentation of the required submission format for the challenges;
  • implementations of the baseline models for each challenge.

See the linked GitHub repository for the toolkit.
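The actual subset-generation tools live in the toolkit; as a generic illustration of the idea (not the toolkit's own code), a class-stratified subsample of a clip list might look like this. All names here (`stratified_subset`, the `(clip_id, label)` pair format) are illustrative assumptions:

```python
import random
from collections import defaultdict

def stratified_subset(samples, fraction, seed=0):
    """Pick `fraction` of the clips per class, preserving class balance.

    `samples` is a list of (clip_id, label) pairs. This is a generic
    sketch, not the actual VIPriors toolkit implementation.
    """
    by_class = defaultdict(list)
    for clip_id, label in samples:
        by_class[label].append(clip_id)

    rng = random.Random(seed)  # fixed seed so the subset is reproducible
    subset = []
    for label, clips in sorted(by_class.items()):
        n = max(1, round(len(clips) * fraction))  # keep at least one clip per class
        subset.extend((clip, label) for clip in rng.sample(clips, n))
    return subset
```

Stratifying per class rather than sampling uniformly keeps rare action classes represented even at small fractions of the full dataset.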

Evaluation Criteria

The task will be evaluated using classification accuracy on the test set, as in the original dataset. The winner of the challenge will be the entry with the highest Top-1 accuracy. As additional information about the models, we will also report Top-3 and Top-5 accuracy.
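Top-k accuracy counts a clip as correct when its true class appears among the k highest-scoring classes. A minimal self-contained sketch (the function name and score format are assumptions, not the official evaluation script):

```python
def topk_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scoring classes.

    scores: list of per-class score lists, one per sample.
    labels: list of true class indices, one per sample.
    """
    hits = 0
    for row, label in zip(scores, labels):
        # Rank classes by descending score and keep the k best
        topk = sorted(range(len(row)), key=lambda c: row[c], reverse=True)[:k]
        if label in topk:
            hits += 1
    return hits / len(labels)

# Example: 3 clips, 4 classes
scores = [
    [0.1, 0.5, 0.2, 0.2],  # highest score on class 1
    [0.4, 0.1, 0.3, 0.2],  # highest score on class 0
    [0.2, 0.2, 0.1, 0.5],  # highest score on class 3
]
labels = [1, 2, 3]
top1 = topk_accuracy(scores, labels, 1)  # 2 of 3 clips correct at k=1
top3 = topk_accuracy(scores, labels, 3)  # all 3 clips correct at k=3
```

By definition Top-1 accuracy is ordinary classification accuracy, and Top-3/Top-5 can only be greater than or equal to it.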

Please refer to the challenge toolkit for more details and tools to generate valid submissions. Don't forget to zip your submission file, as CodaLab only accepts ZIP archives as submissions.
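Wrapping a predictions file in a ZIP archive can be done with the standard library. The file name `predictions.txt` and its contents here are placeholders; use the submission format documented in the toolkit:

```python
import zipfile

# Placeholder predictions file; follow the toolkit's documented format instead.
with open("predictions.txt", "w") as f:
    f.write("clip_0001 5\n")  # hypothetical "<clip> <class>" line

# CodaLab only accepts ZIP archives, so wrap the file before uploading.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    # arcname places the file at the archive root, without directory prefixes
    zf.write("predictions.txt", arcname="predictions.txt")
```

Keeping the file at the archive root (via `arcname`) avoids a common failure mode where the scoring program cannot find a file nested inside a folder.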

Terms and Conditions

  • We prohibit the use of other data than the provided training data, i.e., no pre-training, no transfer learning. This includes pre-trained backbones.
  • Top contenders in the challenge may be required to submit their submissions to the organizers for peer review to ensure reproducibility and competitive integrity. The organizers will contact contenders when necessary after the challenges close.
  • Organizers retain the right to disqualify any submissions that violate these rules.

Development (validation set)

Start: March 2, 2020, midnight UTC

Description: Use this phase for debugging your submission. Your submissions are evaluated against the validation set. Don't forget to zip your submission file as CodaLab only takes ZIP archives as submissions.

Competition (test set)

Start: March 10, 2020, 11 p.m. UTC

Description: Don't forget to zip your submission file as CodaLab only takes ZIP archives as submissions.

Competition Ends

July 3, 2020, 10:59 p.m. UTC
