COCO Detection Challenge (Segmentation Mask)

Organized by richardaecn


COCO Object Detection (Segmentation Mask)

 

NOTE: Make sure you are aware of the new challenge rules. 

For detection with bounding-box outputs, please refer to https://competitions.codalab.org/competitions/20794.

Object detection is a fundamental problem in understanding visual scenes. To promote and measure progress in this area, we carefully created the Common Objects in Context (COCO) dataset to provide resources for training, validation, and testing of object detectors.

The object detection task with segmentation mask output is part of the Joint COCO and Mapillary Recognition Challenge Workshop at ICCV 2019. For further details about the joint workshop, please visit the workshop page. Researchers are encouraged to participate in both the COCO and Mapillary Panoptic Segmentation Tasks (the tasks share identical data formats and evaluation metrics). Please also see the related COCO keypoint, stuff, and panoptic tasks.

To participate in the challenge, you can find instructions on the COCO website. In particular, please see the overview, download, format, and detections eval pages for more details.
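As an illustration of the expected result format (a minimal sketch; the image id, category id, mask, and file name below are placeholders), each entry in the results .json is a dict with an image_id, a category_id, an RLE-encoded segmentation, and a score:

    import json

    import numpy as np
    from pycocotools import mask as mask_utils

    # Placeholder detections: (image_id, category_id, binary_mask, score).
    # Replace with your model's actual outputs.
    detections = [
        (139, 1, np.zeros((426, 640), dtype=np.uint8), 0.92),
    ]

    results = []
    for image_id, category_id, binary_mask, score in detections:
        # COCO expects masks as compressed RLE; pycocotools requires a
        # Fortran-ordered uint8 array of shape (height, width).
        rle = mask_utils.encode(np.asfortranarray(binary_mask))
        rle["counts"] = rle["counts"].decode("ascii")  # make it JSON-serializable
        results.append({
            "image_id": image_id,
            "category_id": category_id,
            "segmentation": rle,
            "score": float(score),
        })

    with open("segm_results.json", "w") as f:
        json.dump(results, f)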

Instead of submitting your results as a single zipped .json file, you may split them into multiple (3 to 5) .json files and compress those into a single .zip file for submission.
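For example (a sketch; the file names are placeholders), the split .json files can be packed into one .zip archive with Python's standard library:

    import zipfile

    # A single "segm_results.json" works too; these split names are placeholders.
    result_files = ["segm_results_part1.json", "segm_results_part2.json"]

    # Store the .json files at the top level of a single .zip for upload.
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for path in result_files:
            zf.write(path, arcname=path)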

Notice that evaluation metrics are computed allowing for at most 100 top-scoring detections per image (across all categories).
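Detections beyond that limit cannot improve your score, so it can help to prune them before submitting. A small sketch (assuming a results list in the format above; the file names are placeholders) that keeps only the 100 highest-scoring detections per image:

    import json
    from collections import defaultdict

    MAX_DETS = 100  # evaluation uses at most 100 top-scoring detections per image

    def cap_per_image(results, max_dets=MAX_DETS):
        """Keep only the top-scoring detections per image, across all categories."""
        by_image = defaultdict(list)
        for det in results:
            by_image[det["image_id"]].append(det)
        capped = []
        for dets in by_image.values():
            dets.sort(key=lambda d: d["score"], reverse=True)
            capped.extend(dets[:max_dets])
        return capped

    with open("segm_results.json") as f:
        results = json.load(f)
    with open("segm_results_capped.json", "w") as f:
        json.dump(cap_per_image(results), f)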

For the latest competition results, please refer to the COCO detection leaderboard.

The COCO API is used to evaluate detection results. The software provides features to handle the I/O of images, annotations, and evaluation results. Please visit the overview page to get started and the detections eval page for further evaluation details.
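For local evaluation against val2017, a minimal sketch with the COCO API (pycocotools); the paths are placeholders, assuming the instances_val2017 annotation file and a result file as above:

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Placeholder paths; point them at your local copies.
    ann_file = "annotations/instances_val2017.json"
    res_file = "segm_results.json"

    coco_gt = COCO(ann_file)             # ground-truth annotations
    coco_dt = coco_gt.loadRes(res_file)  # your detection results

    # "segm" selects segmentation-mask evaluation (use "bbox" for boxes).
    coco_eval = COCOeval(coco_gt, coco_dt, "segm")
    coco_eval.evaluate()
    coco_eval.accumulate()
    coco_eval.summarize()  # prints the AP/AR summary, e.g. AP at IoU=.50:.05:.95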

Please refer to the COCO Terms of Use.

val2017 (segm)

Start: Aug. 11, 2019, midnight

Description: The val evaluation server for *segmentation mask* detection on the 5K 2017 val images from http://cocodataset.org/#download. Evaluation usually takes about 10 minutes; please see the forums for troubleshooting submissions. We encourage using val2017 for validation experiments; for publication, please evaluate your results on test-dev. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to val2017 will NOT be posted to the public leaderboard on cocodataset.org.

test-dev2019 (segm)

Start: Aug. 11, 2019, midnight

Description: The test-dev evaluation server for *segmentation mask* detection. Evaluation usually takes about 10 minutes; please see the forums for troubleshooting submissions. We encourage using test-dev for reporting evaluation results in publications. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to test-dev are regularly migrated to the public leaderboard on cocodataset.org. Please choose "Submit to Leaderboard" if you want your submission to appear on our leaderboard. Results migrated to the COCO leaderboard will be removed from the CodaLab leaderboard.

test-challenge2019 (segm)

Start: Aug. 11, 2019, midnight

Description: The test-challenge evaluation server for *segmentation mask* detection. This challenge is part of the Joint COCO and Mapillary Recognition Challenge Workshop at ICCV 2019. For further details about the joint workshop please visit the workshop website at http://cocodataset.org/workshop/coco-mapillary-iccv-2019.html and the challenge webpage at http://cocodataset.org/#detection-2019. Evaluation usually takes about 20 minutes.

Competition Ends

Oct. 5, 2019, 6:59 a.m.

Leaderboard

#  Username       Score
1  youtube_test   0.47
2  ShuchunLiu     0.46
3  WYQ            0.46