NOTE: Make sure you are aware of the new challenge rules.
We are pleased to introduce the COCO Panoptic Segmentation Task with the goal of advancing the state of the art in scene segmentation. Panoptic segmentation addresses both stuff and thing classes, unifying the typically distinct semantic and instance segmentation tasks. The aim is to generate coherent scene segmentations that are rich and complete, an important step toward real-world vision systems such as those used in autonomous driving or augmented reality. For full details of the panoptic segmentation task, please see the panoptic evaluation page.
In a bit more detail: things are countable objects such as people, animals, and tools. Stuff classes are amorphous regions of similar texture or material such as grass, sky, and road. Previous COCO tasks addressed thing and stuff classes separately; see the instance segmentation and stuff segmentation tasks, respectively. To encourage the study of stuff and things in a unified framework, we introduce the COCO Panoptic Segmentation Task. The definition of 'panoptic' is "including everything visible in one view"; in our context, panoptic refers to a unified, global view of segmentation. The panoptic segmentation task involves assigning a semantic label and an instance id to each pixel of an image, which requires generating dense, coherent scene segmentations. The stuff annotations for this task come from the COCO-Stuff project described in this paper. For more details about the panoptic task, including evaluation metrics, please see the panoptic segmentation paper.
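To illustrate the per-pixel labeling, the snippet below is a minimal sketch assuming the standard COCO panoptic PNG convention, in which a segment id is spread across the R, G, B channels of the annotation image; the helper name decode_segment_ids is hypothetical and not part of any official tooling.

```python
# A minimal sketch, assuming the standard COCO panoptic PNG convention
# (decode_segment_ids is a hypothetical helper name, not part of the API).
import numpy as np
from PIL import Image

def decode_segment_ids(png_path):
    # Each pixel of a panoptic PNG encodes a segment id across its R, G, B channels:
    # id = R + 256 * G + 256**2 * B (the convention used by panopticapi's rgb2id).
    rgb = np.array(Image.open(png_path).convert("RGB"), dtype=np.uint32)
    return rgb[..., 0] + 256 * rgb[..., 1] + (256 ** 2) * rgb[..., 2]
```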
The panoptic segmentation task is part of the Joint COCO and LVIS Recognition Challenge Workshop at ECCV 2020. For further details about the joint workshop please visit the workshop page. Researchers are encouraged to participate in both the COCO and Mapillary Panoptic Segmentation Tasks (the tasks share identical data formats and evaluation metrics). Please also see the related COCO detection, keypoint, and stuff tasks.
The panoptic task uses all the annotated COCO images and includes the 80 thing categories from the detection task and a subset of the 91 stuff categories from the stuff task, with any overlaps resolved. The Panoptic Quality (PQ) metric is used for performance evaluation; for details, see the panoptic evaluation page.
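For reference, the snippet below is a minimal, unofficial sketch of how PQ combines matched and unmatched segments; panoptic_quality is an illustrative helper, not the official panopticapi implementation, and the full definition is given in the panoptic segmentation paper.

```python
# A minimal, unofficial sketch of the PQ computation
# (panoptic_quality is an illustrative helper, not part of the COCO tooling).
def panoptic_quality(matched_ious, num_fp, num_fn):
    # matched_ious: IoU of each matched (predicted, ground-truth) segment pair with IoU > 0.5
    # num_fp / num_fn: counts of unmatched predicted / ground-truth segments
    tp = len(matched_ious)
    denom = tp + 0.5 * num_fp + 0.5 * num_fn
    return sum(matched_ious) / denom if denom > 0 else 0.0
```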
This CodaLab evaluation server provides a platform to measure performance on the val, test-dev and test-challenge sets. The COCO Panoptic API is provided to compute the performance metrics used to evaluate panoptic segmentation.
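A minimal usage sketch is shown below, assuming the pq_compute entry point of the panopticapi package and placeholder file paths; consult the COCO Panoptic API repository for the exact invocation.

```python
# A minimal usage sketch, assuming the pq_compute entry point of the panopticapi
# package; file paths are placeholders for your own ground truth and predictions.
from panopticapi.evaluation import pq_compute

results = pq_compute(
    gt_json_file="panoptic_val2017.json",
    pred_json_file="panoptic_prediction.json",
    gt_folder="panoptic_val2017",
    pred_folder="panoptic_prediction",
)
print(results["All"]["pq"])  # overall PQ; Things/Stuff and per-class breakdowns are also reported
```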
To participate, please find instructions on the COCO website. In particular, see the overview, challenge description, download, data format, results format, guidelines, upload, and evaluate pages for more details.
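For orientation, the sketch below outlines roughly what a single panoptic results entry looks like (all values are placeholders); the data format and results format pages remain the authoritative reference.

```python
# A rough, hedged sketch of a single panoptic results entry (all values are placeholders);
# the data format and results format pages are the authoritative reference.
example_annotation = {
    "image_id": 139,                        # COCO image id
    "file_name": "000000000139.png",        # PNG whose pixels encode the predicted segment ids
    "segments_info": [
        {"id": 3226956, "category_id": 1},  # one entry per predicted segment
    ],
}
```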
The COCO Panoptic API is used to evaluate results of the Panoptic Segmentation Challenge. More details can be found on the challenge homepage.
Please refer to COCO Terms of Use.
Start: June 30, 2018, midnight
Description: An example submission covering two images is provided in the COCO Panoptic API repository. Submission files can be created by running:
python format_converter.py --source_folder ./sample_data/panoptic_examples_2ch_format/ --images_json_file ./sample_data/panoptic_examples.json --segmentations_folder ./sample_data/panoptic_prediction/ --predictions_json_file ./sample_data/panoptic_prediction.json
Start: June 30, 2018, midnight
Description: The val evaluation server for panoptic segmentation. The submission has to include annotations for exactly the 5,000 images of the val set. Evaluation should take approximately 2 minutes. We encourage use of the val set for validation experiments; for publication, please evaluate your results on test-dev.
Start: June 30, 2018, midnight
Description: The test-dev evaluation server for panoptic segmentation. The submission has to include annotations for the test-dev set. Evaluation should take approximately 5 minutes. You can access the latest public results for comparison at http://cocodataset.org/#panoptic-leaderboard. We regularly migrate results submitted to test-dev to the public leaderboard on cocodataset.org. Please choose "Submit to Leaderboard" if you want your submission to appear on our leaderboard.
Start: June 13, 2020, midnight
Description: The test-challenge evaluation server for panoptic segmentation. The submission has to include annotations for both the test-dev and test-challenge sets. Evaluation should take approximately 10 minutes. Note that the results remain hidden, even from their authors, until the ECCV workshop. If you submit multiple entries, the one with the best test-dev PQ is selected as your entry for the competition.
Competition ends: Aug. 8, 2020, 6:59 a.m.
# | Username | Score |
---|---|---|
1 | ideacvr666 | 0.595 |
2 | kmaxdeeplab | 0.585 |
3 | bc.ifp.uiuc | 0.583 |