The Unsupervised DAVIS Challenge on Video Object Segmentation @ CVPR 2019

Organized by scaelles

Phases

Test-dev (current): starts April 15, 2019, midnight UTC
Test-challenge: starts May 12, 2019, 11:59 p.m. UTC
Competition ends: June 14, 2019, 11:59 p.m. UTC

Welcome to the 2019 Unsupervised DAVIS Challenge!

This is the submission site for the 2019 Unsupervised DAVIS Challenge on Video Object Segmentation. More details about the challenge, dataset, prizes, and rules are available on the DAVIS website.

Important! In the test-challenge phase, the submission limit is per team, not per user. We define a team as the group of people who would coauthor the final publication. Please follow these instructions to create a competition team, and make sure no profile team is active (the fields should be empty).


Please cite the following papers if you participate in the challenge:

@article{Caelles_arXiv_2019,
  author = {Sergi Caelles and Jordi Pont-Tuset and Federico Perazzi and Alberto Montes and Kevis-Kokitsi Maninis and Luc {Van Gool}},
  title = {The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation},
  journal = {arXiv:1905.00737},
  year = {2019}
}
@article{Pont-Tuset_arXiv_2017,
  author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
  title = {The 2017 DAVIS Challenge on Video Object Segmentation},
  journal = {arXiv:1704.00675},
  year = {2017}
}
@inproceedings{Perazzi2016,
author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
booktitle = {Computer Vision and Pattern Recognition},
year = {2016}
}

Evaluation Criteria

Methods have to provide a pool of N non-overlapping video object proposals for every video sequence, i.e., a segmentation mask for each frame in which the mask id of a given object is consistent throughout the sequence. During evaluation, each annotated object in the ground truth is matched to the one of the N video object proposals that maximizes J&F, using bipartite graph matching. Methods are not penalized for detecting more objects than are annotated in the ground truth. The final J&F result is the mean over all matched objects in all video sequences.

More info in the DAVIS 2019 paper.
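The matching step above can be sketched as follows. This is an illustrative toy, not the official evaluation code: the function name, the J&F values, and the brute-force search over assignments are all assumptions for the example (the real evaluation solves the same bipartite matching, e.g. with the Hungarian algorithm, on per-object J&F computed from the masks).

```python
# Illustrative sketch of matching ground-truth objects to video object
# proposals so that the total J&F is maximized. Extra proposals beyond
# the annotated objects are simply left unmatched (no penalty).
import itertools
import numpy as np

def match_proposals(jf):
    """jf[g, p] = mean J&F between ground-truth object g and proposal p.

    Brute-force search over all injective assignments of ground-truth
    objects to proposals (fine for small N; the official evaluation
    uses bipartite graph matching instead).
    Returns (matched (gt, proposal) pairs, mean J&F over matched objects).
    """
    n_gt, n_prop = jf.shape
    best_total, best_perm = -1.0, None
    for perm in itertools.permutations(range(n_prop), n_gt):
        total = sum(jf[g, p] for g, p in enumerate(perm))
        if total > best_total:
            best_total, best_perm = total, perm
    pairs = list(enumerate(best_perm))
    return pairs, best_total / n_gt

# Toy sequence: 2 annotated objects, 3 proposals (one extra, unpenalized).
jf = np.array([[0.80, 0.10, 0.30],
               [0.20, 0.75, 0.40]])
pairs, mean_jf = match_proposals(jf)  # pairs [(0, 0), (1, 1)], mean J&F 0.775
```

The final benchmark score is then the mean of these matched J&F values over all objects in all sequences.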

Terms and Conditions

Please check the DAVIS website for details about terms and conditions.



Leaderboard

#  Username   Score
1  janysunny  0.456
2  Ali2500    0.429