Beware: This is the old competition; no submissions are possible here.
SemanticKITTI is a large-scale dataset providing point-wise labels for the LiDAR data of the KITTI Vision Benchmark. It is based on the odometry task data and provides annotations for 28 classes, including labels for moving and non-moving traffic participants. Please visit www.semantic-kitti.org for more information.
In this competition, participants have to provide a completed voxel grid for the test sequences 11-21. The task is to predict the complete voxel grid, including semantic labels, from the unlabeled voxel grid of a single scan. The approach thus needs to provide a label for each voxel of a grid with pre-defined dimensions.
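For illustration, the input voxel grids can be loaded along the following lines. This is a minimal sketch assuming the packed-binary layout used by the SemanticKITTI API (one occupancy bit per voxel) and a grid size of 256 x 256 x 32 voxels; both details should be verified against the API:

```python
import numpy as np

GRID_DIMS = (256, 256, 32)  # assumed grid dimensions; verify against the API

def read_occupancy(path):
    """Read a packed binary voxel grid (.bin) into a 0/1 occupancy array."""
    packed = np.fromfile(path, dtype=np.uint8)
    bits = np.unpackbits(packed)          # one occupancy bit per voxel
    return bits.reshape(GRID_DIMS)

occupancy = read_occupancy("sequences/11/voxels/000000.bin")  # hypothetical path
print(occupancy.shape, int(occupancy.sum()), "occupied voxels")
```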
Similar to the training data, you have to provide a single zip file containing a folder "sequences". The sequences folder contains the sub-folders "11", "12", ..., "21", each of which contains a folder "predictions". In it, you have to provide, for each scan, a label file in binary format that stores one unsigned int (16-bit) label per voxel.
The contents of the zip file should be organized like this:
sequences
├── 11
│   └── predictions
│       ├ 000000.label
│       ├ 000001.label
│       ├ ...
├── 12
│   └── predictions
│       ├ 000000.label
│       ├ 000001.label
│       ├ ...
├── 13
.
.
.
└── 21
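As a sketch of the expected file format, a single prediction could be written as follows; the grid size and output path are assumptions for illustration, while the uint16-per-voxel encoding follows the description above:

```python
import os
import numpy as np

def write_prediction(labels, out_dir, scan_id):
    """Write one completed voxel grid as a binary .label file (uint16 per voxel)."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, "%06d.label" % scan_id)
    labels.astype(np.uint16).tofile(path)  # one 16-bit unsigned int per voxel

# hypothetical example: all voxels set to label 0 (empty/unlabeled, assumed)
labels = np.zeros(256 * 256 * 32, dtype=np.uint16)  # grid size assumed
write_prediction(labels, "sequences/11/predictions", 0)
```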
It is strongly recommended to use the verification script of the SemanticKITTI API (available on GitHub), since all submissions count towards the overall maximum number of submissions.
Note: The upload of the zip file with your results takes some time and there is (unfortunately) no indicator for the status of the upload. You will only see that it is being processed once your data has been uploaded successfully.
To assess the labeling performance, we rely on the commonly applied mean Jaccard Index or mean intersection-over-union (mIoU) metric over all classes.
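For reference, the per-class IoU is IoU_c = TP_c / (TP_c + FP_c + FN_c), averaged over all evaluated classes. A minimal sketch of this computation (an illustration, not the official evaluation code):

```python
import numpy as np

def mean_iou(conf):
    """Mean IoU from a (C, C) confusion matrix with conf[gt, pred] counts."""
    tp = np.diag(conf).astype(np.float64)
    fn = conf.sum(axis=1) - tp            # missed voxels of each class
    fp = conf.sum(axis=0) - tp            # voxels wrongly predicted as each class
    denom = tp + fp + fn
    iou = np.divide(tp, denom, out=np.zeros_like(tp), where=denom > 0)
    return iou.mean()
```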
As the classes other-structure and other-object either have only very few points or are too diverse with a high intra-class variation, we decided not to include these classes in the evaluation. Thus, we use 25 instead of 28 classes, ignoring outlier, other-structure, and other-object during training and inference. We furthermore do not distinguish between moving and non-moving objects, which results in 19 classes.
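A minimal sketch of this class reduction, assuming the learning_map from the semantic-kitti.yaml configuration shipped with the SemanticKITTI API (which maps raw label IDs to the reduced set of classes, with 0 reserved for ignored labels):

```python
import numpy as np
import yaml

with open("semantic-kitti.yaml") as f:          # config from the SemanticKITTI API
    learning_map = yaml.safe_load(f)["learning_map"]

# lookup table from raw label IDs to the reduced training/evaluation classes
lut = np.zeros(max(learning_map) + 1, dtype=np.uint16)
for raw_id, train_id in learning_map.items():
    lut[raw_id] = train_id

raw = np.fromfile("sequences/00/voxels/000000.label", dtype=np.uint16)  # example path
reduced = lut[raw]                              # ignored classes map to 0
```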
Only the training set is provided for learning the parameters of the algorithms. The test set should be used only for reporting the final results compared to other approaches - it must not be used in any way to train or tune systems, for example by evaluating multiple parameters or feature choices and reporting the best results obtained. Thus, we impose an upper limit (currently 10 attempts) on the number of submissions. It is the participant's responsibility to divide the training set into proper training and validation splits, e.g., we use sequence 08 for validation. The tuned algorithms should then be run - ideally - only once on the test data.
The evaluation server may not be used for parameter tuning since we rely here on a shared resource that is provided by the Codalab team and its sponsors. We ask each participant to upload the final results of their algorithm/paper submission only once to the server and perform all other experiments on the validation set. If participants would like to report results in their papers for multiple versions of their algorithm (e.g., parameters or features), this must be done on the validation data and only the best performing setting of the novel method may be submitted for evaluation to our server. If comparisons to baselines from third parties (which have not been evaluated on the benchmark website) are desired, please contact us for a discussion.
Important note: It is NOT allowed to register multiple times to the server using different email addresses. We are actively monitoring submissions and will delete submissions from duplicate accounts. When registering with Codalab, we ask all participants to preferably use an institutional email address (e.g., .edu) or a company email address. We will no longer approve email addresses from free email services (e.g., gmail.com, hotmail.com, qq.com). If you need to use such an email address, please contact us to approve your account.
Our dataset is based on the KITTI Vision Benchmark and we therefore distribute the data under the Creative Commons Attribution-NonCommercial-ShareAlike license. You are free to share and adapt the data, but you have to give appropriate credit and may not use the work for commercial purposes.
Specifically, you should cite our work:
@inproceedings{behley2019arxiv,
  author = {J. Behley and M. Garbade and A. Milioto and J. Quenzel and S. Behnke and C. Stachniss and J. Gall},
  title = {{SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences}},
  booktitle = {Proc.~of the IEEE International Conf.~on Computer Vision (ICCV)},
  year = {2019}
}
Please also cite the original KITTI Vision Benchmark:
@inproceedings{geiger2012cvpr,
  author = {A. Geiger and P. Lenz and R. Urtasun},
  title = {{Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite}},
  booktitle = {Proc.~of the IEEE Conf.~on Computer Vision and Pattern Recognition (CVPR)},
  pages = {3354--3361},
  year = {2012}
}
For more information, please visit our website at http://www.semantic-kitti.org/.
Before you can submit your first results, you need to register with CodaLab and log in to participate. Only then can you submit results to the evaluation server, which will score your submission on the non-public test set.
Good luck with your submission!
Start: Nov. 1, 2019, midnight
Description: Starting 08/24/20: Please ensure that you have version 1.1 of the data!
End: Never
# | Username | Score |
---|---|---|
1 | gjt | 0.341 |
2 | Noah_Canada | 0.295 |
3 | JS3C-Net | 0.238 |