The Semi-Supervised DAVIS Challenge on Video Object Segmentation @ CVPR 2019

Organized by scaelles

Phases:

- Previous: Test-challenge (started May 12, 2019, 11:59 p.m. UTC)
- Current: Test-dev (started March 1, 2018, midnight UTC)
- Competition ends: May 24, 2019, 11:59 p.m. UTC

Welcome to the 2019 Semi-Supervised DAVIS Challenge!

This is the submission site for the Semi-Supervised 2019 DAVIS Challenge on Video Object Segmentation. You can find more details about the challenge, dataset, prizes, and rules on the DAVIS website.

Important! In the test-challenge phase, the limit on the number of submissions is per team, not per user. We define a team as the group of people who would coauthor the final publication. Please follow these instructions to create a competition team, and make sure you have no active profile team (the fields should be empty).


Please cite the following papers if you participate in the challenge:

@article{Caelles_arXiv_2019,
  author = {Sergi Caelles and Jordi Pont-Tuset and Federico Perazzi and Alberto Montes and Kevis-Kokitsi Maninis and Luc {Van Gool}},
  title = {The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation},
  journal = {arXiv:1905.00737},
  year = {2019}
}
@article{Pont-Tuset_arXiv_2017,
  author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year = {2017}
}
@inproceedings{Perazzi2016,
  author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
  title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
  booktitle = {Computer Vision and Pattern Recognition},
  year = {2016}
}

Evaluation Criteria

In the semi-supervised task, the ground-truth segmentation of the first frame is given to the methods. Although the class of each object was recently released, it must not be used as additional information by the methods.

The results will be evaluated as the mean over all objects of two metrics: Region Similarity (J) and Contour Accuracy (F). The final score is the mean of the two. Both measures were introduced in the original CVPR 2016 DAVIS paper.
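The aggregation described above can be sketched as follows. This is a minimal illustration of the scoring rule only, not the official evaluation code (linked below); the per-object J and F values are hypothetical:

```python
# Sketch of the challenge scoring: J and F are each averaged over all
# objects, and the final score is the mean of the two averages.

def davis_score(j_per_object, f_per_object):
    """Return (mean J, mean F, final score) for per-object metric lists."""
    j_mean = sum(j_per_object) / len(j_per_object)
    f_mean = sum(f_per_object) / len(f_per_object)
    return j_mean, f_mean, (j_mean + f_mean) / 2.0

# Hypothetical per-object results for one submission:
j = [0.5, 1.0]    # Region Similarity per object
f = [0.25, 0.75]  # Contour Accuracy per object
print(davis_score(j, f))  # → (0.75, 0.5, 0.625)
```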

The code used for the evaluation can be found here. Please upload a zip file with the sequence folders directly at its root: running 'unzip your_results.zip' should produce one folder per sequence containing the results. The frames of each sequence should be in indexed PNG format.
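The expected layout (sequence folders at the zip root, each holding indexed-PNG frames) can be produced with a short packaging script such as the sketch below. The function name and the sequence/file names are hypothetical examples, not the actual challenge sequences:

```python
# Sketch: package per-sequence result folders into a zip with the sequence
# folders directly at the archive root, as required by the submission site.
import os
import zipfile

def pack_results(results_dir, zip_path):
    """Zip every sequence folder in results_dir directly at the archive root."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for seq in sorted(os.listdir(results_dir)):
            seq_dir = os.path.join(results_dir, seq)
            if not os.path.isdir(seq_dir):
                continue  # skip stray files next to the sequence folders
            for frame in sorted(os.listdir(seq_dir)):
                if frame.endswith(".png"):
                    # Archive name is <sequence>/<frame>.png, with no
                    # extra top-level folder wrapping the sequences.
                    zf.write(os.path.join(seq_dir, frame), f"{seq}/{frame}")
```

Unzipping the resulting archive then yields one folder per sequence at the top level, matching the instruction above.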

Terms and Conditions

Please check the DAVIS website for details about terms and conditions.



Leaderboard

#  Username      Score
1  jieson_zheng  0.734
2  swoh          0.722
3  Jono          0.716