COCO 2017 Stuff Segmentation Challenge

Organized by nightrome


The COCO 2017 Stuff Segmentation Challenge is designed to push the state of the art in semantic segmentation of stuff classes. Whereas the COCO 2017 Detection Challenge addresses thing classes (person, car, elephant), this challenge focuses on stuff classes (grass, wall, sky). For full details of this task please see the stuff evaluation page.

Things are objects with a specific size and shape that are often composed of parts. Stuff classes are background materials defined by homogeneous or repetitive patterns of fine-scale properties, but with no specific or distinctive spatial extent or shape. Why the focus on stuff? Stuff covers about 66% of the pixels in COCO and helps explain important aspects of an image, including the scene type, which thing classes are likely to be present and where they are located, and the geometric properties of the scene. The COCO 2017 Stuff Segmentation Challenge builds on the COCO-Stuff project, as described on this website and in this research paper, and it includes and extends the original dataset release. Please note that, in order to scale annotation, stuff segmentations were collected on superpixel segmentations of each image.

This challenge is part of the Joint COCO and Places Recognition Challenge Workshop at ICCV 2017. For further details about the joint workshop please visit the workshop website. Please also see the concurrent COCO 2017 Detection and Keypoint Challenges.

The challenge includes 55K COCO images (train 40K, val 5K, test-dev 5K, test-challenge 5K) with annotations for 91 stuff classes and 1 'other' class. The stuff annotations cover 38M superpixels (10B pixels) with 296K stuff regions (5.4 stuff labels per image). Annotations for train and val are now available for download, while test set annotations will remain private. We provide annotations in json and png format for easier access.
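The PNG variant stores one label map per image and can be read with standard image libraries. The short sketch below (in Python) assumes that pixel values encode stuff label ids and uses a hypothetical file path; neither detail is specified on this page.

    # Read a per-image PNG stuff annotation with Pillow and NumPy.
    # Assumption: pixel values are stuff label ids (path below is hypothetical).
    import numpy as np
    from PIL import Image

    png_path = 'stuff_val2017_pixelmaps/000000000139.png'   # hypothetical path
    label_map = np.array(Image.open(png_path))               # H x W array of label ids

    labels, counts = np.unique(label_map, return_counts=True)
    for label, count in zip(labels, counts):
        print('label %d: %d pixels' % (label, count))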

This CodaLab evaluation server provides a platform to measure performance on the val, test-dev and test-challenge sets. The COCO Stuff API is provided to compute several performance metrics to evaluate semantic segmentation.

To participate, you can find instructions on the COCO website. In particular, please see the overview, challenge description, download, format, guidelines, evaluate, and leaderboard pages for more details.

The COCO Stuff API is used to evaluate results of the Stuff Segmentation Challenge. For an overview of the relevant files, see this page. The software compares candidate and reference segmentations and applies the following evaluation metrics to both leaf categories and supercategories: Mean Intersection-Over-Union (IoU), Frequency-Weighted IoU, Mean Accuracy, and Pixel Accuracy. More details can be found on the challenge homepage.
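As a rough reference, all four metrics can be derived from a class-by-class confusion matrix. The sketch below uses the standard definitions of these metrics and is not the official COCO Stuff API implementation.

    # Compute the four stuff metrics from a confusion matrix C, where
    # C[i, j] counts pixels of ground-truth class i predicted as class j.
    # Standard definitions; not the official COCO Stuff API code.
    import numpy as np

    def stuff_metrics(conf):
        conf = conf.astype(np.float64)
        gt_per_class = conf.sum(axis=1)      # ground-truth pixels per class
        pred_per_class = conf.sum(axis=0)    # predicted pixels per class
        tp = np.diag(conf)                   # correctly labeled pixels per class
        union = gt_per_class + pred_per_class - tp

        valid = gt_per_class > 0             # ignore classes absent from the ground truth
        iou = tp / np.maximum(union, 1)
        freq = gt_per_class / conf.sum()

        return {
            'mean_iou': iou[valid].mean(),
            'fw_iou': (freq[valid] * iou[valid]).sum(),
            'mean_accuracy': (tp[valid] / gt_per_class[valid]).mean(),
            'pixel_accuracy': tp.sum() / conf.sum(),
        }

    # Toy example with a 2-class confusion matrix:
    print(stuff_metrics(np.array([[8, 2],
                                  [1, 9]])))

Frequency-weighted IoU weights each class by its share of ground-truth pixels, so frequent background classes contribute more to the score than rare ones.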

Please refer to the COCO Terms of Use.

examples

Start: Sept. 15, 2017, midnight

Description: The two example images provided in the COCO Stuff API repository. The results file can be created by running the pngToCocoResultDemo script. Evaluation should take approximately 2 minutes.
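Conceptually, that conversion turns every stuff label present in a PNG label map into one run-length-encoded result entry per image. The sketch below illustrates the idea using pycocotools; it assumes that pixel value 0 marks unlabeled pixels and is not the demo script itself.

    # Convert a PNG label map into COCO-style stuff results using pycocotools.
    # Assumptions: pixel values are category ids and 0 means 'unlabeled'.
    import numpy as np
    from PIL import Image
    from pycocotools import mask as maskUtils

    def png_to_results(png_path, image_id):
        label_map = np.array(Image.open(png_path))
        results = []
        for category_id in np.unique(label_map):
            if category_id == 0:             # assumed 'unlabeled' value
                continue
            binary = np.asfortranarray((label_map == category_id).astype(np.uint8))
            rle = maskUtils.encode(binary)
            rle['counts'] = rle['counts'].decode('ascii')  # make the RLE JSON-serializable
            results.append({'image_id': image_id,
                            'category_id': int(category_id),
                            'segmentation': rle})
        return results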

val2017

Start: Sept. 15, 2017, midnight

Description: The val evaluation server for stuff segmentation. The submission has to include annotations for exactly 5,000 images. Evaluation should take approximately 7 minutes.

test-dev2017

Start: Sept. 21, 2017, midnight

Description: The test-dev evaluation server for stuff segmentation. The submission has to include annotations for exactly 20,288 images, 5,000 of which are currently used for evaluation. Evaluation should take approximately 7 minutes.

test-challenge2017

Start: Sept. 21, 2017, midnight

Description: The test-challenge evaluation server for stuff segmentation. The submission has to include annotations for exactly 40,670 images (test-dev + test-challenge), 9,878 of which are currently used for evaluation. Evaluation should take approximately 30 minutes. Note that the results remain hidden, even from their authors, until October 15.

Competition Ends

Oct. 9, 2017, midnight

Leaderboard

#   Username    Score
1   nightrome   0.241