The 2nd Large-scale Video Object Segmentation Challenge - Track 1: Video Object Segmentation

Organized by fyc0624


Introduction

Video object segmentation has been studied extensively over the past decade due to its importance in understanding video spatial-temporal structures as well as its value in industrial applications. Recently, data-driven algorithms (e.g. deep learning) have become the dominant approach to computer vision problems, and one of the most important keys to their success is the availability of large-scale datasets. Last year, we presented the first large-scale video object segmentation dataset, named YouTubeVOS, and hosted the 1st Large-scale Video Object Segmentation Challenge in conjunction with ECCV 2018. This year, we are thrilled to invite you to the 2nd Large-scale Video Object Segmentation Challenge in conjunction with ICCV 2019. The benchmark is an augmented version of the YouTubeVOS dataset with more annotations, and some incorrect annotations have been corrected. For more details, check our website for the workshop and challenge.

In this workshop and its accompanying competition, we present the first large-scale dataset for video object segmentation, which allows participating teams to try novel and bold ideas that could not succeed with previous small-scale datasets. In contrast to previous video object segmentation datasets, our dataset has the following advantages:

  • Our dataset contains 4000+ high-resolution video clips, downloaded from YouTube with diverse content. It is more than 30 times larger than the largest existing dataset for video object segmentation (i.e. DAVIS).
  • Our dataset covers a total of 94 object categories, including common objects such as animals, vehicles, accessories, and people in different activities.
  • The videos in our dataset are taken by both amateurs and professionals, so in addition to diverse object motion there is frequently significant camera motion.
  • Our segmentation masks are carefully labeled by human annotators to ensure high quality.


We expect that our new dataset will bring new possibilities for novel ideas in dense-prediction video tasks, as well as provide a more comprehensive evaluation methodology for video segmentation technology.

Timetable

  • May 20th: Release of the training and validation datasets.
  • Jun 1st: Set up the submission server on CodaLab and open submission of validation results.
  • Aug 15th–30th: Release of the test dataset and open submission of test results.
  • Sep 5th: The final competition results will be announced, and high-performing teams will be invited to give oral/poster presentations at our ICCV 2019 workshop.

Task

The challenge task is semi-supervised video object segmentation, which targets segmenting a particular object instance throughout an entire video sequence given only the object mask in the first frame, as sketched below. Unlike the previous video object segmentation challenges at CVPR 2017 and 2018, we will provide training and test data at a much larger scale to foster a wide variety of algorithms. In addition, our test dataset contains unseen categories that do not exist in the training dataset, in order to evaluate the generalization ability of algorithms.
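To make the input/output protocol concrete, here is a minimal sketch in Python. The function name and the trivial copy-the-first-mask baseline are illustrative assumptions, not part of any official toolkit; a real entry would track and re-segment each object across frames.

```python
# Minimal sketch of the semi-supervised VOS protocol: one first-frame
# mask in, one predicted mask per frame out. The naive baseline below
# simply propagates the first-frame mask unchanged (illustrative only).
from typing import List
import numpy as np

def segment_video(frames: List[np.ndarray],
                  first_mask: np.ndarray) -> List[np.ndarray]:
    """frames: list of H x W x 3 RGB frames; first_mask: H x W array of
    object IDs for frame 0. Returns a predicted ID mask for every frame."""
    return [first_mask.copy() for _ in frames]
```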

Dataset

Our dataset contains three subsets.

  • Training: 3471 video sequences with densely-sampled multi-object annotations. Each object is annotated with a category name; there are 65 categories in the training set.
  • Validation: 507 video sequences with first-frame annotations. It includes objects from the 65 training categories plus 26 categories unseen in training.
  • Test: another 541 sequences with first-frame annotations. It includes objects from the 65 training categories plus 29 categories unseen in training.

RGB images and annotations for the labeled frames will be provided, and we will also provide a download link for all image frames; a minimal sketch of reading one annotation frame follows. Evaluation on the validation and test sets will be done by uploading results to our evaluation server. Category information for the validation and test sets will not be released.
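The snippet below is a rough illustration of reading one annotation frame, assuming the YouTubeVOS convention of palette-indexed PNG masks in which pixel value 0 is background and each positive value identifies one object instance; the file path is a placeholder, not a real sequence ID.

```python
# Hedged sketch: read one annotation frame and split it into per-object
# binary masks. Assumes palette-indexed PNGs (0 = background, k = object k);
# the path below is a placeholder, not a real video ID.
import numpy as np
from PIL import Image

mask = np.array(Image.open("train/Annotations/<video_id>/00000.png"))
object_ids = [int(i) for i in np.unique(mask) if i != 0]
binary_masks = {i: (mask == i) for i in object_ids}
print("object instances in this frame:", object_ids)
```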


Evaluation Criteria

Similar to the previous DAVIS video object segmentation challenges, we will use region Jaccard (J) and boundary F-measure (F) as evaluation metrics. The overall ranking measure is computed as follows:

  1. Compute J and F for both seen and unseen categories, averaged over all corresponding objects.
  2. The final score is the average of the four resulting scores: J for seen categories, F for seen categories, J for unseen categories, and F for unseen categories.

Note that some objects first appear in the middle of a video; for these objects, we only compute the metrics from their first occurrence to the end of the video.
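For reference, a self-contained sketch of both metrics and the final ranking score is given below. J is the standard intersection-over-union; the F shown here is a simplified dilation-based approximation of the boundary measure (the official DAVIS code matches boundaries more precisely), so treat this as illustrative rather than the challenge's exact evaluator.

```python
# Hedged sketch of the evaluation: region Jaccard (J), an approximate
# boundary F-measure (F), and the final score as the mean of the four
# per-split averages. Inputs are boolean H x W masks.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def jaccard(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Convention: if both masks are empty, the prediction is perfect.
    return 1.0 if union == 0 else inter / union

def boundary_f(pred: np.ndarray, gt: np.ndarray, tol: int = 3) -> float:
    # One-pixel-wide boundaries: a mask minus its erosion.
    pb = pred & ~binary_erosion(pred)
    gb = gt & ~binary_erosion(gt)
    # Boundary pixels count as matched if within `tol` pixels of the
    # other boundary (approximated here by dilation).
    precision = (pb & binary_dilation(gb, iterations=tol)).sum() / max(pb.sum(), 1)
    recall = (gb & binary_dilation(pb, iterations=tol)).sum() / max(gb.sum(), 1)
    return 0.0 if precision + recall == 0 else \
        2 * precision * recall / (precision + recall)

def overall(j_seen, f_seen, j_unseen, f_unseen) -> float:
    """Each argument is a list of per-object scores for one split."""
    return (np.mean(j_seen) + np.mean(f_seen)
            + np.mean(j_unseen) + np.mean(f_unseen)) / 4
```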


Terms and Conditions

The annotations in this dataset belong to the organizers of the challenge and are licensed under a Creative Commons Attribution 4.0 License.

The data is released for non-commercial research purposes only.

The organizers of the dataset as well as their employers make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the organizers, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted videos that he or she may create from the Database. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions. The organizers reserve the right to terminate Researcher's access to the Database at any time. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.

Phases

  • Development: starts June 1, 2019, midnight UTC
  • Testing: starts Aug. 15, 2019, midnight UTC
  • Competition ends: Aug. 30, 2019, 11:59 p.m. UTC

Leaderboard

  # Username     Score
  1 zxyang1996   0.824
  2 theodoruszq  0.822
  3 zszhou       0.820