Co-located with ECCV 2022
SensatUrban is an urban-scale photogrammetric point cloud dataset with nearly three billion richly annotated points. The dataset covers large areas of two UK cities, spanning about 6 km² of the urban landscape. Each 3D point is labelled as one of 13 semantic classes, such as ground, vegetation, and car. Please refer to our paper and website for details.
In this competition, participants have to provide labels for each point of the test splits of the dataset. The input to all evaluated methods is therefore a list of three-dimensional point coordinates along with their appearance, i.e., the RGB value of each point. Each method should then output a label for each point, which is used for the final performance evaluation.
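As an illustrative sketch only (the exact data-loading utilities and file formats are provided by the SensatUrban API; the function below is a hypothetical placeholder, not part of the official toolkit), the expected interface of an evaluated method looks roughly like this:

import numpy as np

def predict_block(xyz: np.ndarray, rgb: np.ndarray) -> np.ndarray:
    # xyz: (N, 3) float array of 3D point coordinates.
    # rgb: (N, 3) array with the RGB value of each point.
    # Returns an (N,) integer array containing one of the 13 class indices per point.
    # Trivial placeholder: assign every point to class index 0.
    return np.zeros(len(xyz), dtype=np.uint8)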
For fairness, all participants may only use the released SensatUrban dataset to train their networks. Pretraining the models on any other public or private dataset is not allowed. If the unlabelled testing split is used during training, the participant should clearly specify the experimental settings in the submission.
Users who do not participate in the challenge are free to use our dataset in combination with others for their own research purposes.
We are thankful to USC-ICT for sponsoring the following prizes. The prizes will be awarded to the top 3 individuals or teams on the leaderboard that provide a valid submission.
Place | Prize
---|---
1st | $1,500 USD
2nd | $1,000 USD
3rd | $500 USD
If you find our work useful in your research, please consider citing:
@inproceedings{hu2020towards,
title={Towards Semantic Segmentation of Urban-Scale 3D Point Clouds: A Dataset, Benchmarks and Challenges},
author={Hu, Qingyong and Yang, Bo and Khalid, Sheikh and Xiao, Wen and Trigoni, Niki and Markham, Andrew},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2021}
}
You have to provide a single zip file containing the label files.
The contents of the zip-file should be organized like this:
zip
├── description.txt (required, see below)
├── birmingham_block_2.label
├── birmingham_block_8.label
├── cambridge_block_15.label
├── cambridge_block_16.label
├── cambridge_block_22.label
└── cambridge_block_27.label
Please include a description.txt file with the following content:
method name:
method description:
project url:
publication url:
bibtex:
organization or affiliation:
email:
Important: Submitting the description.txt is required to get your result evaluated by our server. Submissions without a method description will not be considered in the final competition, will not be eligible for the prize award, and will be removed from the leaderboard. If the approach has been previously published, please include the publication URL, a detailed description of any improvements made, and the parameters used in this competition. Please also describe any data augmentation techniques used, as well as challenges and issues you faced.
It is strongly recommended to use the verification script of the SensatUrban API (available on GitHub), since all submissions count towards the overall maximum number of submissions.
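As a minimal packaging sketch (not the official tooling): it assumes each .label file stores one class index per point as raw 8-bit integers in the same point order as the corresponding test block, which you should confirm against the SensatUrban API before submitting. The predictions dictionary and file contents below are placeholders.

import zipfile
import numpy as np

# Hypothetical per-block predictions: block name -> (N,) array of class indices in [0, 12].
predictions = {
    "birmingham_block_2": np.zeros(100, dtype=np.uint8),  # replace with real model outputs
    "cambridge_block_15": np.zeros(100, dtype=np.uint8),
}

description = (
    "method name: MyMethod\n"
    "method description: ...\n"
    "project url: ...\n"
    "publication url: ...\n"
    "bibtex: ...\n"
    "organization or affiliation: ...\n"
    "email: ...\n"
)

with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("description.txt", description)
    for block, labels in predictions.items():
        # Assumed binary layout: one uint8 label per point, no header.
        zf.writestr(block + ".label", labels.astype(np.uint8).tobytes())

Whatever tooling you use, run the official verification script on the resulting zip before uploading, since failed attempts still count towards your submission limit.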
Important: Select the appropriate "phase" for your method so that the final result is averaged over the correct number of classes.
Note: Uploading the zip file with your results takes some time, and there is (unfortunately) no indicator of the upload status. You will only see that the submission is being processed once your data has been uploaded successfully.
To assess the labeling performance, we rely on the commonly applied mean Jaccard Index or mean intersection-over-union (mIoU) metric over all classes.
We use a total of 13 semantic classes during training and testing, including ground, vegetation, building, wall, bridge, parking, rail, car, footpath, bike, water, traffic road, and street furniture.
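For reference, here is a minimal sketch of how mIoU can be computed from a confusion matrix (the authoritative evaluation is performed by our server; the handling of classes that are absent from a test block may differ in detail):

import numpy as np

NUM_CLASSES = 13

def confusion_matrix(gt, pred):
    # gt, pred: (N,) integer arrays with class indices in [0, NUM_CLASSES).
    idx = gt.astype(np.int64) * NUM_CLASSES + pred.astype(np.int64)
    return np.bincount(idx, minlength=NUM_CLASSES ** 2).reshape(NUM_CLASSES, NUM_CLASSES)

def mean_iou(conf):
    # Per-class IoU_c = TP_c / (TP_c + FP_c + FN_c), then averaged over all classes.
    tp = np.diag(conf).astype(np.float64)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1.0)  # empty classes get IoU 0 in this sketch
    return float(iou.mean())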
Only the training set is provided for learning the parameters of the algorithms. The test set should be used only for reporting the final results compared to other approaches - it must not be used in any way to train or tune systems, for example, by evaluating multiple parameters or feature choices and reporting the best results obtained. Thus, we impose an upper limit (currently 5 attempts) on the number of submissions. It is the participant's responsibility to divide the training set into proper training and validation splits. The tuned algorithms should then be run - ideally - only once on the test data and the results of the test set should not be used to adapt the approach.
The evaluation server may not be used for parameter tuning since we rely here on a shared resource that is provided by the Codalab team and its sponsors. We ask each participant to upload the final results of their algorithm/paper submission only once to the server and perform all other experiments on the validation set. If participants would like to report results in their papers for multiple versions of their algorithm (e.g., parameters or features), this must be done on the validation data and only the best performing setting of the novel method may be submitted for evaluation to our server. If comparisons to baselines from third parties (which have not been evaluated on the benchmark website) are desired, please contact us for a discussion.
Important note: It is NOT allowed to register multiple times to the server using different email addresses. We are actively monitoring submissions and we will revoke access and delete submissions. When registering with Codalab, we ask all participants to use a unique institutional email address (e.g., .edu) or company email address. We will not approve email addresses from free email services anymore (e.g., gmail.com, hotmail.com, qq.com). If you need to use such an email address, then contact us to approve your account.
The provided dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike license, and all underlying data remains at all times the property of Sensat Ltd and/or other affiliated Sensat entities. You are free to share and adapt the data, but you must give appropriate credit and may not use the work for commercial purposes.
Specifically, you should consider citing our work using the BibTeX entry above.
For more information, please visit our project page at https://github.com/QingyongHu/SensatUrban.
Before you can submit your first results, you need to register with CodaLab and log in to participate. Only then can you submit results to the evaluation server, which will score your submission on the non-public test set.
Good luck with your submission!
Qingyong Hu, University of Oxford
Meida Chen, University of Southern California - Institute for Creative Technologies
Tai-Ying Cheng, University of Oxford
Sheikh Khalid, Sensat
Bo Yang, The Hong Kong Polytechnic University
Ronald Clark, Imperial College London
Yulan Guo, National University of Defense Technology
Ales Leonardis, University of Birmingham
Niki Trigoni, University of Oxford
Andrew Markham, University of Oxford
Please contact Qingyong Hu if you have any questions.
Start: April 12, 2021, midnight
Description: Train and test your model in a fully-supervised way
End: Jan. 1, 2050, 11 p.m.
# | Username | Score
---|---|---
1 | salientman | 0.6870
2 | timeAssassin7 | 0.6840
3 | yanxugg | 0.6810