In the surface normal estimation challenge, participants have to estimate unoriented surface normals for point clouds. Point clouds of CAD models are provided, randomly sampled from the surfaces at three densities: 1k, 5k, and 10k points. For all points, ground-truth surface normals derived from the CAD surface descriptions are given in the training set and have to be estimated for the validation and test sets. Each point can have 1-3 unit-length normals: 1 for patch points, 2 for edge points, and 3 for vertex points. For corners with more than 3 adjacent surfaces, 3 of the normals are randomly picked.
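To make this labeling rule concrete, the sketch below shows one way to reduce a point's adjacent-surface normals to at most three ground-truth normals. It is purely illustrative: the function name and the input representation are assumptions, not part of the dataset tooling.

import numpy as np

def pick_gt_normals(adjacent_normals, rng=None):
    # Illustrative only: mimics the rule above. A point adjacent to
    # 1 surface (patch point) keeps 1 normal, 2 surfaces (edge point)
    # keep 2, 3 surfaces (vertex point) keep 3; for corners with more
    # than 3 adjacent surfaces, 3 normals are picked at random.
    rng = rng or np.random.default_rng()
    normals = np.asarray(adjacent_normals, dtype=float)
    if len(normals) > 3:
        normals = normals[rng.choice(len(normals), size=3, replace=False)]
    # return unit-length normals
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)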
For details about the other ABC Geometry Challenges and the workshop visit:
https://sites.google.com/view/dlgc-workshop-cvpr2020/home
Please cite the following paper if you participate in this challenge or use the dataset in your approach:
@InProceedings{Koch_2019_CVPR,
  author    = {Koch, Sebastian and Matveev, Albert and Jiang, Zhongshi and Williams, Francis and Artemov, Alexey and Burnaev, Evgeny and Alexa, Marc and Zorin, Denis and Panozzo, Daniele},
  title     = {ABC: A Big CAD Model Dataset For Geometric Deep Learning},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}
We supply three datasets: a training dataset with ground-truth normals for training your model; a validation dataset without ground-truth normals, for which you can upload your estimates and receive feedback on your performance; and a testing dataset without ground-truth normals, for which you can submit your estimates without immediate feedback. The evaluation on the testing dataset determines the final score.
One unit-length normal has to be estimated per point. All estimated normals for one point cloud have to be written into a text file with the same name as the point cloud file, in the same order as the points (see the example submission in the starting kit).
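For example, a submission file could be written as in the sketch below. This assumes one normal per line with three whitespace-separated components; the example submission in the starting kit defines the authoritative format, and the function name here is hypothetical.

import numpy as np

def write_submission(est_normals, out_path):
    # est_normals: (N, 3) array with one unit-length normal per point,
    # in the same order as the points in the corresponding point cloud.
    # out_path: text file with the same name as the point cloud file.
    np.savetxt(out_path, np.asarray(est_normals, dtype=float), fmt="%.6f")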
Evaluation is performed in two phases, validation and testing, both using the same metric:
The estimated point normals are compared to the ground-truth normals with the evaluation function (n^T e)^2, where n is the ground-truth normal and e is the estimated normal, both of unit length. The score is the mean over all normals and all models, with 1.0 being the highest score and 0.0 the lowest. Note that this function does not account for the orientation of the normals. See our starting kit for Python evaluation code.
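A minimal sketch of this metric follows (assuming, for simplicity, one ground-truth normal per point; the authoritative implementation is the Python evaluation code in the starting kit):

import numpy as np

def normal_score(gt_normals, est_normals):
    # Both inputs: (N, 3) arrays of unit-length normals.
    # Squaring the dot product makes the score invariant to the
    # sign (orientation) of the normals.
    dots = np.einsum("ij,ij->i", gt_normals, est_normals)
    return float(np.mean(dots ** 2))

A perfect estimate (e = ±n for every point) scores 1.0; normals orthogonal to the ground truth score 0.0.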
Participants can train or optimize their approach on the supplied training dataset and validate their performance on the validation dataset. To enter the challenge, they submit their estimates for the testing dataset before the submission end date. The final evaluation is run on the estimated normals for the testing data.
Training and Validation phase: starts April 6, 2020, midnight
Testing phase: starts Dec. 1, 2020, midnight
Submission deadline: Dec. 31, 2020, 11:59 p.m.