2021 Multi-View Partial (MVP) Challenge
The 3D point cloud is an intuitive representation of 3D scenes and objects, with extensive applications in various vision and robotics tasks. Unfortunately, scanned 3D point clouds are usually incomplete owing to occlusions and missing measurements, hampering practical usage. This challenge focuses on two fundamental problems: point cloud completion (predicting the complete 3D shape from a partially observed point cloud) and point cloud registration (estimating a rigid transformation that aligns a source point cloud to a target one).
As an effort to build a more unified and comprehensive dataset for incomplete point clouds, we contribute the MVP dataset, a high-quality multi-view partial point cloud dataset, to the community. It contains over 100,000 high-quality scans; for each 3D CAD model, partial 3D shapes are rendered from 26 uniformly distributed camera poses.
Besides the public training set we have released, the dataset also features a hidden extra test set. The evaluation of the MVP Challenge is performed on this hidden test set. Participants are required to submit final prediction files, which we will then evaluate.
To access the MVP dataset and code base, please visit our GitHub project, where you can find detailed data descriptions and usage examples for the completion and registration tracks, respectively.
Users can participate in one or both of the following tracks:
This phase evaluates algorithms for point cloud completion on the MVP dataset. Submit a .zip file compressing a single .h5 file that contains ALL the completion results as a (N_test_samples, 2048, 3) array, stored under the key "results", for online evaluation. Please carefully check the format to ensure a successful submission.
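The submission format described above can be produced with a few lines of h5py. This is a minimal sketch: the array here is a zero-filled placeholder, and `N_test_samples` must be replaced with the actual test-set size.

```python
import numpy as np
import h5py

# Placeholder predictions; replace with your model's completed point clouds.
N_test_samples = 4  # use the actual number of hidden test samples
predictions = np.zeros((N_test_samples, 2048, 3), dtype=np.float32)

# Store the array under the key "results", as required by the evaluation server.
with h5py.File("results.h5", "w") as f:
    f.create_dataset("results", data=predictions)

# Sanity-check the file before zipping and uploading it.
with h5py.File("results.h5", "r") as f:
    assert f["results"].shape == (N_test_samples, 2048, 3)
```

The resulting `results.h5` file should then be compressed into a .zip archive for upload.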
This phase evaluates algorithms for point cloud registration on the MVP dataset. Submit a .zip file compressing a single .h5 file that contains ALL the predicted transformation matrices as a (N_test_samples, 4, 4) array, stored under the key "results", for online evaluation. Please carefully check the format to ensure a successful submission.
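For the registration track, each prediction is a 4x4 rigid transformation. The sketch below packs per-sample transforms into the required (N_test_samples, 4, 4) array; the homogeneous convention [R t; 0 1] shown here is an assumption, so confirm it against the challenge's GitHub project.

```python
import numpy as np
import h5py

# Placeholder: identity transforms for every test sample.
N_test_samples = 4  # use the actual number of hidden test samples
transforms = np.tile(np.eye(4, dtype=np.float32), (N_test_samples, 1, 1))

# Example of filling one entry from a predicted rotation R and translation t
# (assumed homogeneous layout: top-left 3x3 is R, top-right 3x1 is t).
R = np.eye(3, dtype=np.float32)               # predicted rotation
t = np.array([0.1, 0.0, -0.2], np.float32)    # predicted translation
transforms[0, :3, :3] = R
transforms[0, :3, 3] = t

with h5py.File("results.h5", "w") as f:
    f.create_dataset("results", data=transforms)
```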
Reminder: public submission closes on Sep. 12; during the private submission period (Sep. 12 - 19), the TOP-5 participants in each track are required to submit their source code and pre-trained models to us for evaluation. The code will ONLY be used to verify the legitimacy of the algorithm and will NOT be distributed further.
Please check the terms and conditions for further rules and details.
If you have any questions, please contact us by raising an issue on our GitHub project.
We evaluate the reconstruction accuracy by computing the Chamfer Distance between the predicted complete shape (P) and the ground truth shape (Q) as below. The results are averaged across the whole test set for overall evaluation criteria.
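A common symmetric form of the Chamfer Distance between the prediction P and ground truth Q is given below; the exact variant used (e.g., squared vs. unsquared distances) follows the evaluation code in the GitHub project.

```latex
\mathrm{CD}(P, Q) =
\frac{1}{|P|} \sum_{p \in P} \min_{q \in Q} \lVert p - q \rVert_2^2
+ \frac{1}{|Q|} \sum_{q \in Q} \min_{p \in P} \lVert q - p \rVert_2^2
```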
We evaluate the registration accuracy by computing the differences between the predicted transformation (T_pred) and the ground truth transformation (T_gt) as below. The metric is defined as a weighted sum (denoted as MSE on the leaderboard) of 1) the rotation angle difference and 2) the translation difference. The results are averaged across the whole test set for overall evaluation criteria.
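A typical way to define the two error terms, with R and t the rotation and translation parts of each transformation, is sketched below; the weights alpha and beta and the exact conventions are assumptions here and follow the challenge's evaluation code.

```latex
\theta_{\mathrm{err}} = \arccos\!\left(\frac{\operatorname{tr}\!\left(R_{\mathrm{pred}}^{\top} R_{\mathrm{gt}}\right) - 1}{2}\right),
\qquad
t_{\mathrm{err}} = \lVert t_{\mathrm{pred}} - t_{\mathrm{gt}} \rVert_2,
\qquad
\mathrm{MSE} = \alpha\,\theta_{\mathrm{err}} + \beta\, t_{\mathrm{err}}
```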
Please visit our GitHub project for details.
The MVP Challenge 2021 will run for approximately eight weeks. The challenge starts together with the 3rd ICCV 2021 Workshop on Sensing, Understanding and Synthesizing Humans. Participants are restricted to training their algorithms on the publicly available MVP training dataset. A hidden test set is used for online evaluation and for maintaining a public leaderboard. The final awards will be revealed around Oct. 2021.
When participating in the competition, please be reminded that:
Before downloading and using the MVP dataset, please agree to the following terms of use. You, your employer, and your affiliations are referred to as "User." The authors and their affiliations, SenseTime, are referred to as "Producer."
@article{pan2021variational,
  title={Variational Relational Point Completion Network},
  author={Pan, Liang and Chen, Xinyi and Cai, Zhongang and Zhang, Junzhe and Zhao, Haiyu and Yi, Shuai and Liu, Ziwei},
  journal={arXiv preprint arXiv:2104.10154},
  year={2021}
}
Start: July 12, 2021, midnight
Description: This track evaluates algorithms for point cloud completion on the MVP dataset. Submit a .zip file compressing a single .h5 file that contains ALL the completion results as a (N_test_samples, 2048, 3) array for online evaluation. Please carefully check the format to ensure a successful submission.
Start: July 12, 2021, midnight
Description: This track evaluates algorithms for point cloud registration on the MVP dataset. Submit a .zip file compressing a single .h5 file that contains ALL the predicted transformation matrices as a (N_test_samples, 4, 4) array for online evaluation. Please carefully check the format to ensure a successful submission.
Sept. 12, 2021, 11:59 p.m.
# | Username | Score
---|---|---
1 | alanhsu24 | 1.00000
2 | Wy_Z | 1.00000
3 | Myles | 1.00000