The DAVIS Challenge on Video Object Segmentation @ CVPR 2018 Forum

> How to evaluate the test results?

If I understand correctly, the DAVIS17 test annotations are not provided, and the provided Python and Matlab code is supposed to help us evaluate our test results (it creates a file), which we then submit for the competition? But the Python code seems unable to evaluate without the ground truths. I've obtained merged results and ran "python python/tools/eval.py -i {path to my results} -o results_17_1.yaml --year 2017 --phase testdev", and got "Exception: Incorrect frames number for sequence 'aerobatics': found 1, expected 71", presumably because only the first frame of ground truth is provided for the sequence "aerobatics". What am I doing wrong when evaluating the results?
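For reference, I assume the analogous command on a set that does ship with ground truth, e.g. the validation set, would be something like "python python/tools/eval.py -i {path to my results} -o results_17_val.yaml --year 2017 --phase val" (I'm guessing "val" is the phase name here; the output filename is arbitrary). Is local evaluation simply not possible for test-dev?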

Thanks!

Posted by: alextheengineer @ Jan. 15, 2018, 8:21 p.m.

Hello alextheengineer,

Sorry for the late reply. I see that you already have some submissions on the leaderboard, so I guess you figured it out. For future reference: using the Matlab or Python packages you can only evaluate the train and validation sets (the only ones with ground-truth masks publicly available). To evaluate on test-dev or test-challenge (when the competition is open), you have to submit your results to Codalab. The Matlab package includes the script "helpers/create_submission_zip.m", which will help you create a zip in the appropriate format for submission.
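If it helps, here is a small sanity check you can run before uploading (my own sketch, not part of the official toolkit; the script name "check_frames.py" and the assumption that results are stored as one PNG mask per JPEG frame, one folder per sequence, are mine). It mirrors the "Incorrect frames number" check that tripped you up:

    # check_frames.py -- hypothetical helper, not part of the DAVIS toolkit.
    # Compares the number of PNG masks in each result sequence folder against
    # the number of JPEG frames in the dataset, mirroring the evaluator's
    # "Incorrect frames number" check.
    import os
    import sys

    def count_files(folder, ext):
        # Count files in 'folder' whose name ends with the given extension.
        return sum(1 for f in os.listdir(folder) if f.lower().endswith(ext))

    def check_results(results_root, images_root):
        # Both roots are assumed to hold one sub-folder per sequence.
        ok = True
        for seq in sorted(os.listdir(results_root)):
            res_dir = os.path.join(results_root, seq)
            img_dir = os.path.join(images_root, seq)
            if not (os.path.isdir(res_dir) and os.path.isdir(img_dir)):
                continue
            n_masks = count_files(res_dir, '.png')
            n_frames = count_files(img_dir, '.jpg')
            if n_masks != n_frames:
                print("Sequence '%s': found %d masks, expected %d"
                      % (seq, n_masks, n_frames))
                ok = False
        return ok

    if __name__ == '__main__':
        # Usage: python check_frames.py <results root> <DAVIS/JPEGImages/480p>
        sys.exit(0 if check_results(sys.argv[1], sys.argv[2]) else 1)

Running it over your test-dev results against the test-dev JPEGImages folder should flag any sequence with missing frames before you create and upload the zip.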

Best,
scaelles

Posted by: scaelles @ March 3, 2018, 6:08 p.m.