Update! Find the new evaluation site here.
Update! Find the final test-challenge leaderboard results on the DAVIS website.
This is the submission site for the Unsupervised 2020 DAVIS Challenge on Video Object Segmentation. You can find more details about the challenge, dataset, prizes, and rules on the DAVIS website.
Important! In the test-challenge phase, the submission limit applies per team, not per user. We define a team as the group of people who would coauthor the final publication. Please follow these instructions to create a competition team, and make sure no team is active in your profile (the fields should be empty).
Please cite the following papers if you participate in the challenge:
@article{Caelles_arXiv_2019,
author = {Sergi Caelles and Jordi Pont-Tuset and Federico Perazzi and Alberto Montes and Kevis-Kokitsi Maninis and Luc {Van Gool}},
title = {The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation},
journal = {arXiv:1905.00737},
year = {2019}
}
@article{Pont-Tuset_arXiv_2017,
author = {Jordi Pont-Tuset and Federico Perazzi and Sergi Caelles and Pablo Arbel\'aez and Alexander Sorkine-Hornung and Luc {Van Gool}},
title = {The 2017 DAVIS Challenge on Video Object Segmentation},
journal = {arXiv:1704.00675},
year = {2017}
}
@inproceedings{Perazzi2016,
author = {F. Perazzi and J. Pont-Tuset and B. McWilliams and L. {Van Gool} and M. Gross and A. Sorkine-Hornung},
title = {A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation},
booktitle = {Computer Vision and Pattern Recognition},
year = {2016}
}
Methods must provide a pool of N non-overlapping video object proposals for every video sequence, i.e., a segmentation mask for each frame of the sequence, where the mask id of a given object must be consistent throughout the whole sequence. During evaluation, each annotated object in the ground truth is matched, via bipartite graph matching, with the one of the N video object proposals that maximizes J&F. Note that methods are not penalized for detecting more objects than are annotated in the ground truth. The final J&F result is the mean over all matched objects in all video sequences.
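The matching step above can be sketched as follows. This is an illustrative toy, not the official evaluation code: it scores pairs with the Jaccard index J only (the benchmark uses J&F), uses flat boolean lists instead of per-frame masks, and finds the optimal assignment by brute force over permutations, which is fine for a handful of objects. The function and variable names (`iou`, `match_objects`, the toy masks) are our own.

```python
from itertools import permutations

def iou(mask_a, mask_b):
    """Jaccard index (the J measure) between two boolean masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    union = sum(a or b for a, b in zip(mask_a, mask_b))
    return inter / union if union else 0.0

def match_objects(gt_masks, proposal_masks):
    """Assign each ground-truth object to a distinct proposal so that
    the total score is maximized (brute-force bipartite matching).
    Extra, unmatched proposals are not penalized."""
    n_gt = len(gt_masks)
    best_score, best_assign = -1.0, None
    # Try every injective assignment of GT objects to proposals.
    for perm in permutations(range(len(proposal_masks)), n_gt):
        score = sum(iou(gt_masks[g], proposal_masks[p])
                    for g, p in enumerate(perm))
        if score > best_score:
            best_score, best_assign = score, perm
    # The sequence result is the mean over matched GT objects.
    return best_assign, best_score / n_gt

# Toy example: two GT objects, three proposals (one extra, unpenalized).
gt = [[1, 1, 0, 0], [0, 0, 1, 1]]
props = [[0, 0, 1, 0], [1, 1, 0, 0], [0, 1, 1, 1]]
assign, mean_score = match_objects(gt, props)
print(assign, mean_score)  # GT 0 -> proposal 1, GT 1 -> proposal 2
```

In the real benchmark the pairwise scores are mean J&F over all frames of the sequence, and an efficient assignment solver (e.g. the Hungarian algorithm) replaces the brute-force search.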
More info in the DAVIS 2019 paper.
Please check the DAVIS website for details about terms and conditions.
Start: April 15, 2019, midnight
Start: May 3, 2020, 11:59 p.m.
May 15, 2020, 11:59 p.m.