NOTE: Make sure you are aware of the new challenge rules.
For detection with bounding-box outputs, please refer to https://competitions.codalab.org/competitions/20794.
Object detection is a fundamental problem in understanding visual scenes. To promote and measure progress in this area, we carefully created the Common Objects in Context (COCO) dataset to provide resources for training, validating, and testing object detection algorithms.
The object detection with segmentation masks task is part of the Joint COCO and LVIS Recognition Challenge Workshop at ECCV 2020. For further details about the joint workshop, please visit the workshop page. Researchers are encouraged to participate in both the COCO and Mapillary Panoptic Segmentation Tasks (the tasks share identical data formats and evaluation metrics). Please also see the related COCO keypoint, stuff, and panoptic tasks.
Instead of submitting your results as a single zipped .json file, you may split them into multiple (3 to 5) .json files and compress them into a single .zip file for submission.
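A minimal, stdlib-only sketch of packaging split result files into one .zip (the file names and detection entries here are hypothetical placeholders, not a prescribed naming scheme):

```python
import json
import zipfile

# Hypothetical: detections split into parts, each a list of COCO-format result dicts.
parts = {
    "results_part1.json": [{"image_id": 1, "category_id": 1, "score": 0.9}],
    "results_part2.json": [{"image_id": 2, "category_id": 3, "score": 0.7}],
}

# Bundle every part into a single compressed .zip for submission.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for name, dets in parts.items():
        zf.writestr(name, json.dumps(dets))
```

Each member of the archive remains an ordinary .json file, so the evaluation server can read the parts independently.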
Note that evaluation metrics are computed allowing at most 100 top-scoring detections per image (across all categories).
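Because of this limit, it can help to cap your own output at the 100 highest-scoring detections per image before submitting. A stdlib-only sketch (the detection dicts are hypothetical; a small cap is used in the example for illustration):

```python
from collections import defaultdict

def cap_per_image(detections, k=100):
    """Keep at most k top-scoring detections per image, across all categories."""
    by_image = defaultdict(list)
    for det in detections:
        by_image[det["image_id"]].append(det)
    capped = []
    for dets in by_image.values():
        # Sort each image's detections by descending score and keep the top k.
        dets.sort(key=lambda d: d["score"], reverse=True)
        capped.extend(dets[:k])
    return capped

# Illustration: 5 detections on one image, capped to the top 3 by score.
dets = [{"image_id": 1, "score": s / 10.0} for s in range(5)]
print(len(cap_per_image(dets, k=3)))  # → 3
```

The cap is applied per image and pooled over all categories, matching how the metric truncates the detection list.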
For the latest competition results, please refer to the COCO detection leaderboard.
The COCO API is used to evaluate detection results. The software provides functions to handle I/O of images, annotations, and evaluation results. Please visit the overview page to get started and the detections eval page for more evaluation details.
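For segmentation masks, each result entry follows the COCO results format described on the detections eval page: an image id, a category id, an RLE-encoded segmentation, and a confidence score. A sketch of writing such a file (all values are hypothetical; in practice the RLE "counts" string is produced by pycocotools' mask utilities, not written by hand):

```python
import json

# One result entry in the COCO segmentation results format.
# The "counts" value below is a hypothetical placeholder; pycocotools.mask.encode
# normally generates the real RLE string from a binary mask.
result = {
    "image_id": 42,
    "category_id": 18,
    "segmentation": {"size": [480, 640], "counts": "<RLE string>"},
    "score": 0.87,
}

# The submission file is a single JSON array of such entries.
with open("detections_val2017_results.json", "w") as f:
    json.dump([result], f)
```

The "size" field holds the image height and width, so the evaluator can decode the RLE mask back to the original resolution.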
Start: Aug. 11, 2019, midnight
Description: The val evaluation server for *segmentation mask* detection on the 5K 2017 val images at http://cocodataset.org/#download. Evaluation usually takes about 10 minutes; please see the forums for troubleshooting submissions. We encourage use of the val set for validation experiments; for publication, please evaluate your results on test-dev. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to val will NOT be posted to the public leaderboard on cocodataset.org.
Start: Aug. 11, 2019, midnight
Description: The test-dev evaluation server for *segmentation mask* detection. Evaluation usually takes about 10 minutes; please see the forums for troubleshooting submissions. We encourage use of the test-dev set for reporting evaluation results in publications. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. We regularly migrate results submitted to test-dev to the public leaderboard on cocodataset.org. Please choose "Submit to Leaderboard" if you want your submission to appear on our leaderboard. Results migrated to the COCO leaderboard are removed from the CodaLab leaderboard.
Start: June 13, 2020, midnight
Description: The test-challenge evaluation server for *segmentation mask* detection. This challenge is part of the Joint COCO and LVIS Recognition Challenge Workshop at ECCV 2020. For further details about the joint workshop, please visit the workshop website at http://cocodataset.org/workshop/coco-lvis-eccv-2020.html and the challenge webpage at http://cocodataset.org/#detection-2020. Evaluation usually takes about 20 minutes.
End: Aug. 8, 2020, 6 a.m.