COCO Detection Challenge

Organized by richardaecn


COCO Object Detection

Object detection is a fundamental problem in understanding visual scenes. To promote and measure progress in this area, we carefully created the Common Objects in Context (COCO) dataset to provide resources for the training, validation, and testing of object detection algorithms.

The object detection task is part of the Joint COCO and Mapillary Recognition Challenge Workshop at ECCV 2018. For further details about the joint workshop, please visit the workshop page. Researchers are encouraged to participate in both the COCO and Mapillary object detection tasks (the tasks share identical data formats and evaluation metrics). Please also see the related COCO keypoint, stuff, and panoptic tasks.

To participate in the challenge, please follow the instructions on the COCO website. In particular, see the overview, download, format, and detection evaluation pages for more details.

Instead of submitting your results as a single zipped .json file, you may split them into multiple (3 to 5) .json files and compress them into a single .zip file for submission.
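Either way, the upload is one .zip that contains your result .json file(s) in the COCO result format (for bounding boxes, a list of entries with "image_id", "category_id", "bbox" as [x, y, width, height], and "score"). Below is a minimal, unofficial sketch of packaging results this way; the function and file names are placeholders, so follow the naming convention given on the format page.

```python
# Minimal sketch (not an official tool) for packaging COCO detection results.
# Assumes `detections` is already a list of dicts in the COCO result format,
# e.g. {"image_id": 42, "category_id": 18, "bbox": [x, y, w, h], "score": 0.9}.
import json
import zipfile

def write_submission(detections, zip_path="detections_results.zip", num_shards=1):
    """Dump detections to 1..N .json files and bundle them into a single .zip."""
    shard_size = (len(detections) + num_shards - 1) // num_shards
    json_names = []
    for i in range(num_shards):
        shard = detections[i * shard_size:(i + 1) * shard_size]
        name = f"detections_results_{i}.json"  # placeholder name; see the format page
        with open(name, "w") as f:
            json.dump(shard, f)
        json_names.append(name)
    # The server expects one .zip, whether it holds a single .json or 3-5 shards.
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in json_names:
            zf.write(name)
    return zip_path
```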

Note that the evaluation metrics are computed using at most the 100 top-scoring detections per image (across all categories).
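Since only the 100 highest-scoring detections per image are used, any detections beyond that can safely be dropped before packaging to keep the submission small. A small optional sketch of that trimming step, assuming the same COCO result-format list as above:

```python
# Keep only the top-k scoring detections per image (across all categories).
# Purely optional: the evaluator itself enforces the 100-detection limit.
from collections import defaultdict

def keep_top_k(detections, k=100):
    per_image = defaultdict(list)
    for det in detections:
        per_image[det["image_id"]].append(det)
    trimmed = []
    for dets in per_image.values():
        dets.sort(key=lambda d: d["score"], reverse=True)
        trimmed.extend(dets[:k])
    return trimmed
```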

For the latest competition results, please refer to the COCO detection leaderboard.

The COCO API is used to evaluate detection results. The software provides utilities for handling the I/O of images, annotations, and evaluation results. Please visit the overview page to get started and the detection evaluation page for more details on the evaluation.
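As a rough sketch of what a local evaluation run might look like (for example, on val2017 before submitting to the server), the snippet below uses pycocotools; the file paths are placeholders, and iouType should match the phase you are targeting ("bbox" or "segm").

```python
# Sketch of local evaluation with the COCO API (pycocotools); paths are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")          # ground-truth annotations
coco_dt = coco_gt.loadRes("detections_val2017_results.json")  # your result .json

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")  # use "segm" for mask results
# coco_eval.params.maxDets defaults to [1, 10, 100], matching the 100-detection cap.
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR numbers in the same form as the leaderboard
```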

Please refer to the COCO Terms of Use.

test-dev2018 (bbox)

Start: Jan. 1, 2015, midnight

Description: The test-dev2018 evaluation server for *bounding box* detection. Evaluation usually takes about 10 minutes; please see the forums for help troubleshooting submissions. We encourage the use of test-dev for reporting evaluation results for publication. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to test-dev will be migrated regularly to the public leaderboard on cocodataset.org. Please choose "Submit to Leaderboard" if you want your submission to appear on our leaderboard.

test-challenge2017 (bbox)

Start: Sept. 5, 2017, midnight

Description: The detection task with bounding box output will NOT be featured at the COCO 2018 challenge. More details can be found at http://cocodataset.org/#detection-2018 .

test-dev2018 (segm)

Start: Jan. 1, 2015, midnight

Description: The test-dev2018 evaluation server for *segmentation mask* detection. Evaluation usually takes about 10 minutes; please see the forums for help troubleshooting submissions. We encourage the use of test-dev for reporting evaluation results for publication. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to test-dev will be migrated regularly to the public leaderboard on cocodataset.org. Please choose "Submit to Leaderboard" if you want your submission to appear on our leaderboard.

test-challenge2018 (segm)

Start: July 18, 2018, midnight

Description: The test-challenge2018 evaluation server for *segmentation mask* detection. This challenge is part of the Joint COCO and Mapillary Recognition Challenge Workshop at ECCV 2018. For further details about the joint workshop, please visit the workshop website at http://cocodataset.org/workshop/coco-mapillary-eccv-2018.html and the challenge webpage at http://cocodataset.org/#detection-2018. Evaluation usually takes about 20 minutes.

val2018 (bbox)

Start: Sept. 5, 2017, midnight

Description: The val2018 evaluation server for *bounding box* detection on the 5K 2017 val images (see http://cocodataset.org/#download). Evaluation usually takes about 10 minutes; please see the forums for help troubleshooting submissions. We encourage the use of val2018 for validation experiments; for publication, please evaluate your results on test-dev. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to val2018 will NOT be posted to the public leaderboard on cocodataset.org.

val2018 (segm)

Start: Sept. 5, 2017, midnight

Description: The val2018 evaluation server for *segmentation mask* detection on the 5K 2017 val images (see http://cocodataset.org/#download). Evaluation usually takes about 10 minutes; please see the forums for help troubleshooting submissions. We encourage the use of val2018 for validation experiments; for publication, please evaluate your results on test-dev. You can access the latest public results for comparison at http://cocodataset.org/#detections-leaderboard. Results submitted to val2018 will NOT be posted to the public leaderboard on cocodataset.org.

Competition Ends

Aug. 18, 2018, 6:59 a.m. UTC
