SemanticKITTI: Semantic Segmentation


SemanticKITTI is a large-scale dataset providing point-wise labels for the LiDAR data of the KITTI Vision Benchmark. It is based on the odometry task data and provides annotations for 28 classes, including labels for moving and non-moving traffic participants. Please visit www.semantic-kitti.org for more information.

In this competition, one has to provide labels for each point of the test sequences 11-21. The input to all evaluated methods is therefore a list of the coordinates of the three-dimensional points along with their remission, i.e., the strength of the reflected laser beam, which depends on the properties of the hit surface. Each method must output a label for each point of a scan, i.e., one full turn of the rotating LiDAR sensor.
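For reference, each scan in the KITTI odometry layout is stored as a flat binary file of 32-bit floats with four values (x, y, z, remission) per point. A minimal loading sketch using NumPy (the file path is only an example):

    import numpy as np

    def load_scan(path):
        """Load a SemanticKITTI .bin scan as an (N, 4) float32 array.
        Columns are x, y, z (meters, sensor frame) and remission."""
        scan = np.fromfile(path, dtype=np.float32)
        return scan.reshape(-1, 4)

    points = load_scan("sequences/11/velodyne/000000.bin")
    xyz, remission = points[:, :3], points[:, 3]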

Evaluation

Data Format

Similar to the training data, you have to provide a single zip file containing a folder "sequences". This folder contains the sub-folders "11", "12", ..., "21", each of which holds a folder "predictions". For every scan, the predictions folder must contain a label file in binary format that stores one unsigned 32-bit int per point.

The contents of the zip file should be organized as follows:

    sequences
    ├── description.txt (optional)
    ├── 11
    │   └── predictions
    │       ├── 000000.label
    │       ├── 000001.label
    │       └── ...
    ├── 12
    │   └── predictions
    │       ├── 000000.label
    │       ├── 000001.label
    │       └── ...
    ├── 13
    .
    .
    .
    └── 21
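A minimal sketch of producing this layout, assuming you already have one uint32 label per point for every scan; predict is a hypothetical stand-in for your model, and load_scan is the loader sketched above:

    import os
    import zipfile
    import numpy as np

    def write_predictions(labels, out_path):
        # The required binary format: one unsigned 32-bit int per point,
        # in the same order as the points of the scan.
        labels.astype(np.uint32).tofile(out_path)

    # Example for scan 000000 of sequence 11.
    pred_dir = os.path.join("sequences", "11", "predictions")
    os.makedirs(pred_dir, exist_ok=True)
    labels = predict(load_scan("sequences/11/velodyne/000000.bin"))  # hypothetical
    write_predictions(labels, os.path.join(pred_dir, "000000.label"))

    # Package everything with "sequences" at the top level of the zip.
    with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk("sequences"):
            for f in files:
                path = os.path.join(root, f)
                zf.write(path, arcname=path)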

Please include a description.txt file with the following content:

method name: 
method description: 
project url: 
publication url: 
bibtex: 
organization or affiliation: 

The description.txt file is currently our only way of obtaining meta information about an approach. We are working on a solution that would allow us to get this information directly from CodaLab. All fields are optional, and if you need anonymity for a double-blind submission, feel free to leave description.txt empty.

We strongly recommend running the verification script of the SemanticKITTI API (available on GitHub at https://github.com/PRBonn/semantic-kitti-api) before uploading, since all submissions count towards the overall maximum number of submissions.

Important: Select the appropriate "phase" for your method to get the appropriate final result averaged over the correct number of classes.

Note: Uploading the zip file with your results takes some time, and there is (unfortunately) no indicator of the upload status. You will only see that the submission is being processed once your data has been uploaded successfully.

Evaluation Criterion

To assess the labeling performance, we rely on the commonly applied mean Jaccard index, also known as mean intersection-over-union (mIoU), averaged over all evaluated classes.
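Writing TP_c, FP_c, and FN_c for the numbers of true positive, false positive, and false negative points of class c, the score over the C evaluated classes is

    \mathrm{mIoU} = \frac{1}{C} \sum_{c=1}^{C} \frac{TP_c}{TP_c + FP_c + FN_c}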

As the classes other-structure and other-object either contain only a few points or are too diverse with a high intra-class variation, we decided to exclude them from the evaluation. Thus, we use 25 instead of 28 classes, ignoring outlier, other-structure, and other-object during training and inference.

Single scan

Furthermore, we cannot expect to distinguish moving from non-moving objects with a single scan, since the Velodyne LiDAR, unlike radars exploiting the Doppler effect, cannot measure velocities. We therefore combine each moving class with its corresponding non-moving class, resulting in a total of 19 classes for the single-scan evaluation.
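For illustration, this merging amounts to a lookup from moving label IDs to their non-moving counterparts before evaluation. The IDs below are an illustrative subset; the authoritative mapping is the semantic-kitti.yaml file shipped with the SemanticKITTI API:

    import numpy as np

    # Illustrative subset of the moving -> non-moving merge; consult
    # semantic-kitti.yaml in the semantic-kitti-api repository for the
    # complete, authoritative mapping.
    MOVING_TO_STATIC = {
        252: 10,  # moving-car          -> car
        253: 31,  # moving-bicyclist    -> bicyclist
        254: 30,  # moving-person       -> person
        255: 32,  # moving-motorcyclist -> motorcyclist
    }

    def merge_moving(labels):
        # Map moving labels to their static counterparts; leave others as-is.
        return np.array([MOVING_TO_STATIC.get(int(l), int(l)) for l in labels],
                        dtype=np.uint32)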

Multiple scans

With multiple scans, we evaluate all 25 classes, distinguishing between moving and non-moving traffic participants.
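Methods in this setting typically superimpose a window of past scans into the coordinate frame of the current scan using the provided odometry poses. A minimal sketch, assuming KITTI-style pose files (one row-major 3x4 camera-frame matrix per line) and the 3x4 calibration matrix Tr from calib.txt for moving between the camera and LiDAR frames:

    import numpy as np

    def to_homogeneous(mat3x4):
        # Expand a 3x4 pose/calibration matrix to 4x4 homogeneous form.
        T = np.eye(4)
        T[:3, :4] = mat3x4
        return T

    def velodyne_pose(pose_cam, Tr):
        # Convert a camera-frame pose into the LiDAR frame.
        return np.linalg.inv(Tr) @ pose_cam @ Tr

    def aggregate(scans, poses_velo, ref_idx=-1):
        # Transform a list of (N_i, 4) scans into the frame of scans[ref_idx].
        ref_inv = np.linalg.inv(poses_velo[ref_idx])
        merged = []
        for scan, pose in zip(scans, poses_velo):
            pts = np.hstack([scan[:, :3], np.ones((len(scan), 1))])
            pts = (ref_inv @ pose @ pts.T).T
            merged.append(np.hstack([pts[:, :3], scan[:, 3:4]]))
        return np.vstack(merged)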

Terms and Conditions


Our dataset is based on the KITTI Vision Benchmark, and we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license: you are free to share and adapt the data, but you have to give appropriate credit and may not use the work for commercial purposes.

Specifically, you should cite our work:

  @inproceedings{behley2019arxiv,
      author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel 
                  and S. Behnke and C. Stachniss and J. Gall},
      title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
      booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
      year = {2019}}

Please also cite the original KITTI Vision Benchmark:

  @inproceedings{geiger2012cvpr,
      author = {A. Geiger and P. Lenz and R. Urtasun},
      title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
      booktitle = {Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
      pages = {3354--3361},
      year = {2012}}

For more information, please visit our website at http://www.semantic-kitti.org/.

How to Participate

Before you can submit your first results, you need to register with CodaLab and log in. Only then can you submit results to the evaluation server, which will score your submission on the non-public test set.

Steps

  1. Prepare your submission in the required format, as described in the Evaluation section. CodaLab expects you to upload a single zip file.
  2. Use the validation script from the semantic-kitti-api to ensure that the folder structure and the number of label files in the zip file are correct. All submissions count towards the overall maximum number of submissions!
  3. Go to Participate and the Submit / View Results page.
  4. Select the appropriate phase, i.e., Single Scan or Multiple Scan, for which you computed the results.
  5. Fill in the required fields; you can also supply more details later if you need to preserve anonymity for a double-blind submission.
  6. Click "Submit" in the lower part of the page, which opens a file dialog. Select your submission zip file there; it will then be uploaded.
    Important: Don't close the window or tab until you see that a row has been added to the table below the "Submit" button.
  7. The evaluation takes roughly 10 minutes to complete, and you can choose which of your submissions gets added to the leaderboard.

Good luck with your submission!

Single Scan

Start: July 1, 2019, midnight

Description: Single Scan Evaluation (Important: Uploading your results takes some time. Do not close the window before you see the status of your submission!)

Multiple Scans

Start: July 1, 2019, midnight

Description: Multiple Scan Evaluation (Important: Uploading your results takes some time. Do not close the window before you see the status of your submission!)

Competition Ends

Never

#   Username        Score (mIoU)
1   hugues.thomas   0.512
2   SpSN            0.431
3   dante0shy       0.420