ABC: Sharpness Fields Extraction - CVPR 2020

Organized by artonson


ABC Geometry Challenge: Sharpness Fields Extraction

In the sharpness fields extraction challenge, participants estimate, for each point of a 3D point cloud, the distance to the closest sharp feature line. In this dataset and challenge, feature lines are defined as surface curves along which surface normals change by at least 18°. We provide point clouds of CAD models, each randomly sampled from the model surface with 4K points. For every point, the ground truth distance to the nearest feature line, derived from the CAD surface description, is given in the training set and has to be estimated for the validation and test sets.
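The ground truth quantity can be illustrated with a small sketch: if the feature lines are approximated by polyline segments, the per-point label is the minimum point-to-segment distance. The function names below are illustrative and are not part of the starting kit.

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Distance from 3D point p to the segment with endpoints a and b."""
    ab = b - a
    # Parameter of the closest point on the infinite line, clamped to the segment.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def distance_to_feature_lines(points, segments):
    """Per-point distance to the nearest feature-line segment.

    points:   (N, 3) array of sampled surface points
    segments: list of (a, b) endpoint pairs approximating the feature lines
    """
    return np.array([
        min(point_segment_distance(p, a, b) for a, b in segments)
        for p in points
    ])
```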

For details about the other ABC Geometry Challenges and the workshop, visit:

Please refer to the following paper if you participate in this challenge or use the dataset in your approach:

@inproceedings{Koch_2019_CVPR,
  author    = {Koch, Sebastian and Matveev, Albert and Jiang, Zhongshi and Williams, Francis and Artemov, Alexey and Burnaev, Evgeny and Alexa, Marc and Zorin, Denis and Panozzo, Daniele},
  title     = {ABC: A Big CAD Model Dataset For Geometric Deep Learning},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2019}
}


We supply three datasets: a training dataset with ground truth distance field values for training your model; a validation dataset without ground truth, for which you can upload your estimates and receive feedback on your performance; and a testing dataset without ground truth, for which you can submit your estimates without immediate feedback. The evaluation on the testing dataset determines the final score.

One nonnegative floating-point distance value has to be estimated per point. All estimated distance values for a point cloud have to be written into a text file with the same index as the point cloud file, suffixed by "_target", in the same order as the points (see the example submission in the starting kit).

For evaluation, the estimated per-point distance values dPRED are first clipped to the [0, 1] range by taking CLIP(dPRED) = max(0, min(1, dPRED)) and then compared to the ground truth distances dGT (thresholded to 1.0) with the root mean squared error (RMSE) evaluation function: RMSE = SQRT((1/N) * SUM_i (dGT_i - CLIP(dPRED_i))^2), where N is the number of points, i.e., the squared errors are averaged over all points of the point cloud. See our starting kit for the Python evaluation code.
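The metric can be sketched in a few lines of NumPy. This is a minimal re-implementation for illustration; refer to the starting kit for the official evaluation code.

```python
import numpy as np

def rmse_score(d_gt, d_pred):
    """RMSE between ground truth and clipped predicted distances for one point cloud."""
    d_pred = np.clip(np.asarray(d_pred, dtype=float), 0.0, 1.0)  # CLIP to [0, 1]
    d_gt = np.minimum(np.asarray(d_gt, dtype=float), 1.0)        # GT thresholded to 1.0
    return float(np.sqrt(np.mean((d_gt - d_pred) ** 2)))
```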

The final reported score is the mean of the per-point-cloud RMSE scores, computed separately for each resolution (HighRes, MedRes, and LowRes). In total, 3 scores will be computed:

  • HighRes All (high_res_rmse_score)
  • MedRes All (med_res_rmse_score)
  • LowRes All (low_res_rmse_score)

Note: you can submit results separately for different resolutions (e.g., HighRes, MedRes, or LowRes only) to avoid evaluation timeouts.
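Each per-resolution score is thus a simple average of per-cloud RMSE values. A sketch under that reading (the grouping of clouds by resolution is illustrative):

```python
import numpy as np

def resolution_score(clouds):
    """Mean RMSE over the point clouds of one resolution.

    clouds: list of (d_gt, d_pred) array pairs, one pair per point cloud.
    """
    per_cloud = []
    for d_gt, d_pred in clouds:
        d_pred = np.clip(np.asarray(d_pred, dtype=float), 0.0, 1.0)
        d_gt = np.minimum(np.asarray(d_gt, dtype=float), 1.0)
        per_cloud.append(np.sqrt(np.mean((d_gt - d_pred) ** 2)))
    return float(np.mean(per_cloud))

# e.g. scores = {res: resolution_score(clouds_by_res[res])
#                for res in ("HighRes", "MedRes", "LowRes")}
```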

Terms and Conditions

Participants can train or optimize their approach on the supplied training dataset and validate their performance on the validation dataset. To participate in the challenge, they submit the estimated results for the testing dataset before the submission end date. The final evaluation will be run on the estimated distances for the testing data.

Download       Size (MB)   Phase
Starting Kit   0.137       #1 Development
Public Data    0.001       #1 Development
Public Data    0.001       #2 Final


#1 Development

Start: June 1, 2020, midnight UTC

Description: Development phase: submit results for evaluation, with feedback provided on the validation set only.


#2 Final

Start: Dec. 1, 2020, midnight UTC

Description: Final phase: submissions from the previous phase are automatically cloned and used to compute the final score, with feedback provided on the full test set.

Competition Ends

Dec. 31, 2020, 11:59 p.m. UTC
