Pascal Semantic Part Segmentation Challenge

Organized by mjhucla

Current phase: Test (started June 1, 2015, midnight UTC)
Competition ends: Never

Welcome to the Pascal Semantic Part Segmentation Challenge

Recent advances in object-level visual recognition tasks (e.g. object detection and segmentation) have inspired research interest in the semantic parts of objects. To encourage research in this area, we created the Pascal Semantic Part dataset, which augments the PASCAL VOC 2010 dataset with binary masks for semantic parts.

We have set up this evaluation server to host a part segmentation challenge, providing a platform for researchers to use the dataset and test their approaches. Currently, the evaluation focuses on 7 articulated categories.

Please see the details of the data and toolkit for this challenge on the evaluation page and in the paper PASCAL Semantic Part: Dataset and Benchmark.

Please consider citing the following papers if you are using the dataset or the toolkit:
@article{????,
  title   = {PASCAL Semantic Part: Dataset and Benchmark},
  author  = {????},
  journal = {arXiv preprint arXiv:????},
  year    = {2015}
}

@InProceedings{chen2014detect,
  author    = {Chen, Xianjie and Mottaghi, Roozbeh and Liu, Xiaobai and Fidler, Sanja and Urtasun, Raquel and Yuille, Alan},
  title     = {Detect What You Can: Detecting and Representing Objects using Holistic Models and Body Parts},
  booktitle = {CVPR},
  year      = {2014},
}

@InProceedings{wang2015joint,
  author    = {Wang, Peng and Shen, Xiaohui and Lin, Zhe and Cohen, Scott and Price, Brian and Yuille, Alan},
  title     = {Joint Object and Part Segmentation using Deep Learned Potentials},
  booktitle = {ICCV},
  year      = {2015},
}

Data Description and Evaluation

This challenge focuses on 7 articulated categories: bird, cat, cow, dog, horse, person, sheep. Although the parts of these categories vary considerably (e.g. in pose and shape), which makes them challenging, their part definitions are unambiguous.

How to download and use the dataset

  1. Visit the PASCAL Part Dataset webpage, read the instructions and download the data.
  2. Download the toolkit. The toolkit contains functions that convert the original PASCAL Part Dataset annotations into the challenge format (a minimal loading sketch follows this list).
  3. Follow the instructions in the toolkit.
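
The snippet below is a minimal, unofficial sketch of how a converted part-label image could be inspected, assuming the toolkit writes indexed PNGs whose pixel values encode part labels (0 for background). The file path is a placeholder, not a file shipped with the toolkit.

import numpy as np
from PIL import Image

# Hypothetical path to a converted part-label image; substitute a file
# produced by the toolkit on your machine.
label = np.array(Image.open("part_labels/2008_000006.png"))

print("label map shape:", label.shape)           # (height, width)
print("part labels present:", np.unique(label))  # e.g. [0 1 2 ...], 0 = background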

How to submit your results

  1. Register a CodaLab account and participate in the competition.
  2. Use your method to generate label images in PNG format (e.g. 2008_000006.png, 2008_000024.png, etc.). The label format must be consistent with the ground-truth label format, and you should output one image for every entry in the test set list in the toolkit, Toolkit/ImageSets/part_test.txt (see the sketch after this list).
  3. Compress the PNG files into a "submission.zip", without including their parent directory in the archive.
  4. Submit this file to the evaluation server. Please describe the details of your method in the text box before you submit your results. We also encourage you to include the BibTeX entry or links to your paper and project page.
  5. When the server returns your results, you can choose to submit them to the leaderboard, which is publicly accessible.
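
The following is a minimal sketch of steps 2 and 3, assuming your model is wrapped in a predict(image_id) function that returns a 2-D uint8 array of part labels in the ground-truth format. The function name, output directory, and greyscale PNG encoding are illustrative assumptions, not requirements of the official toolkit.

import os
import zipfile
import numpy as np
from PIL import Image

TEST_LIST = "Toolkit/ImageSets/part_test.txt"  # test-set list from the toolkit
OUT_DIR = "predictions"                        # hypothetical output directory
os.makedirs(OUT_DIR, exist_ok=True)

def predict(image_id):
    # Placeholder for your segmentation method; must return a 2-D uint8
    # array of part labels consistent with the ground-truth label format.
    raise NotImplementedError

with open(TEST_LIST) as f:
    image_ids = [line.strip() for line in f if line.strip()]

# Write one PNG per test image, named after the image id.
for image_id in image_ids:
    label_map = predict(image_id).astype(np.uint8)
    Image.fromarray(label_map).save(os.path.join(OUT_DIR, image_id + ".png"))

# Zip the PNGs flat, i.e. without their parent directory in the archive.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for image_id in image_ids:
        zf.write(os.path.join(OUT_DIR, image_id + ".png"),
                 arcname=image_id + ".png")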

Please see more details about the statistics and organization of parts and categories in the paper: PASCAL Semantic Part: Dataset and Benchmark.

Terms and Conditions

The part annotations in Pascal Part dataset belong to the Center for Cognition, Vision and Learning (CCVL) and are licensed under a Creative Commons Attribution 4.0 License.

Please see the PASCAL VOC dataset page for the terms and conditions covering the images and other annotations.

Leaderboard

#  Username  Score
1  mjhucla   44.880