Holistic 3D Vision Challenge - HoliCity Plane Detection Track @ ECCV 2020

Organized by zhou13


This is the submission site for the HoliCity Plane Detection Track of the Holistic 3D Vision Challenge. In this track, your algorithm needs to detect all the planes in a perspective RGB image, along with their positions and normals.

Download

The download link of the dataset can be found under "Participate / Get Data" and "Participate / Files".

Format

For each training image, we provide all the 3D planes in that image, its dense depth map, and its normal map. The planes of an image are represented by two files: *_plan.png and *_plan.npz. The PNG file is a 16-bit monochrome image representing the mask of each plane, in which the intensity of a pixel is the index of the plane that the pixel belongs to; a zero intensity means that the pixel does not belong to any plane. The NPZ file stores the parameter vector w of each plane, where the equation of the plane is w · x = 1. Indices in the PNG file correspond to entries in the NPZ file with an offset of -1 (i.e., pixel value i refers to plane i - 1 in the NPZ file).
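The plane files described above can be loaded as follows. This is a minimal sketch, not the starting-kit code; the NPZ key name "ws" is an assumption, so check the starting kit for the actual key.

```python
import numpy as np
from PIL import Image

def load_planes(png_path, npz_path):
    """Load plane masks and parameters for one training image.

    Assumes the format described above: a 16-bit PNG whose pixel
    values are plane indices (0 = no plane) and an NPZ holding one
    parameter vector w per plane, with an index offset of -1.
    The NPZ key "ws" is an assumption; see the starting kit.
    """
    indices = np.asarray(Image.open(png_path), dtype=np.int32)
    ws = np.load(npz_path)["ws"]            # shape (num_planes, 3)
    # Pixels with value i > 0 belong to plane ws[i - 1].
    masks = [indices == i + 1 for i in range(len(ws))]
    return masks, ws
```

Each returned mask is a boolean array the size of the image, aligned with the corresponding row of `ws`.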

We also provide dense depth/normal maps for extra supervision; they are optional for this challenge. The depth map *_dpth.npz stores the z-buffer of each pixel in meters. The normal map *_nrml.npz stores the outward normal direction of each pixel, a 3D vector in the camera space.

In the starting kit, we provide code to overlay the planes with images and turn them (*_plan.png and *_plan.npz) into depth maps and normal maps.
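The plane-to-depth conversion can be sketched from the plane equation alone: a point seen at pixel (u, v) with depth z is x = z K⁻¹ [u, v, 1]ᵀ in camera space, so substituting into w · x = 1 gives z = 1 / (w · K⁻¹ [u, v, 1]ᵀ). The sketch below assumes a 3×3 intrinsic matrix K; use the intrinsics shipped with the dataset, not a made-up one.

```python
import numpy as np

def plane_depth(w, K, height, width):
    """Depth map induced by a plane w . x = 1 in camera space.

    A point at pixel (u, v) with depth z is x = z * K^{-1} [u, v, 1]^T,
    so w . x = 1 yields z = 1 / (w . K^{-1} [u, v, 1]^T).
    K is the 3x3 camera intrinsic matrix (assumed known here).
    """
    us, vs = np.meshgrid(np.arange(width), np.arange(height))
    rays = np.linalg.inv(K) @ np.stack(
        [us.ravel(), vs.ravel(), np.ones(us.size)])
    z = 1.0 / (w @ rays)                    # per-pixel depth
    return z.reshape(height, width)
```

For a fronto-parallel plane w = (0, 0, 1/d), this reduces to a constant depth d, which is a quick sanity check.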

Submission

HoliCity comes with training, validation, and testing splits. You need to submit your detected planes for the data in both the validation and testing splits. Data in the training and validation splits have the ground truth labelling (planes, depth, and normal), while the data in the testing split only contains the input RGB images.

For each input RGB image, you need to detect the masks of the planes inside it, along with the plane parameters w. The output format is the same as the representation of planes in the training data. You need to generate a 16-bit PNG file storing the pixel-wise plane indices (i.e., the plane masks) and an NPZ file storing w and the confidence of each plane. The lower the index of a plane, the higher its confidence. Only the first 500 planes are considered. See the sample submission in the starting kit for more details.
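Writing one image's prediction in this format can be sketched as below. The NPZ key names ("ws", "scores") and the overlap-resolution rule (higher-confidence planes win contested pixels) are assumptions; follow the sample submission in the starting kit.

```python
import numpy as np
from PIL import Image

def save_prediction(stem, masks, ws, scores):
    """Write one image's detected planes in the submission format.

    masks: list of boolean HxW arrays, ws: (N, 3) plane parameters,
    scores: (N,) confidences. Planes are written in decreasing
    confidence so lower indices mean higher confidence, and only the
    first 500 planes are kept. Key names are assumptions; see the
    sample submission in the starting kit.
    """
    order = np.argsort(scores)[::-1][:500]  # descending confidence
    index_map = np.zeros(masks[0].shape, dtype=np.uint16)
    # Paint low-confidence planes first so higher-confidence planes
    # win any overlapping pixels (an assumed tie-breaking rule).
    for rank, i in reversed(list(enumerate(order))):
        index_map[masks[i]] = rank + 1      # 0 stays "no plane"
    Image.fromarray(index_map).save(stem + "_plan.png")
    np.savez(stem + "_plan.npz",
             ws=np.asarray(ws)[order], scores=np.asarray(scores)[order])
```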

In Phase 1 (Validation), the leaderboard displays performance metrics based on the validation split only. In Phase 2 (Final Test), we will evaluate your performance on the testing split and notify the winner based on the result. You do not need to do anything extra in Phase 2 if you have already uploaded results for the testing split.

Timeline

May 23, 2020 Challenge Launch
July 31, 2020 Challenge Deadline
August 7, 2020 Paper Submissions Deadline
August 10, 2020 Paper Notification
August 23, 2020 Workshop Date (Co-located with ECCV 2020)

Deadlines are at midnight UTC.

Evaluation Criteria

We evaluate the accuracy of your results with both 2D and 3D metrics. The 2D metric evaluates the correctness of the planes' 2D segmentation masks; we use the mean average precision (labeled as mAP) implementation from the detection evaluation of the COCO dataset. The 3D metric evaluates the correctness of the plane parameters along with the segmentation masks; we compute per-plane depth recall curves and use the area under the curve (labeled as APDR) as our 3D metric. Per-plane depth recall curves are commonly used in plane detection research, such as PlaneRCNN and Associative Embedding. We cut off the depth thresholds at 30 m.
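The APDR computation above can be illustrated with a small sketch. This is not the official evaluation code: how predictions are matched to ground-truth planes and how the per-plane depth error is measured are left abstract here, and the normalization by the 30 m cutoff is an assumption.

```python
import numpy as np

def depth_recall_curve(errors, thresholds):
    """Illustrative per-plane depth recall curve and its AUC (APDR).

    errors: one depth error (meters) per ground-truth plane, measured
    against its best-matched prediction, with np.inf for planes that
    no prediction matches. thresholds: increasing depth cutoffs,
    ending at 30 m as in the challenge. Recall at threshold t is the
    fraction of ground-truth planes recovered with error <= t.
    """
    errors = np.asarray(errors, dtype=float)
    recalls = np.array([(errors <= t).mean() for t in thresholds])
    # Trapezoidal area under the curve, normalized by the last cutoff.
    apdr = np.sum((recalls[1:] + recalls[:-1]) / 2.0
                  * np.diff(thresholds)) / thresholds[-1]
    return recalls, apdr
```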

Terms and Conditions

  • Each submission should be associated with a team and its real affiliation.
  • You are allowed to use external data and pretrained models. If you use any external data or pretrained models (e.g., pretrained ImageNet), please write them down in the "method description" when submitting your results.

Dataset

This dataset is distributed by Professor Yi Ma's group at the EECS Department of UC Berkeley. It is made available for non-commercial purposes such as academic research, teaching, scientific publications, and personal experimentation. Please cite HoliCity if you participate in the challenge or find this dataset helpful in your research:

@inproceedings{zhou2020holicity,
  author={Zhou, Yichao and Huang, Jingwei and Dai, Xili and Luo, Linjie and Chen, Zhili and Ma, Yi},
  title={{HoliCity}: A City-Scale Dataset for Learning Holistic 3D Structures},
  booktitle={Not yet published},
  year={2020}
} 

Disclaimer: The street-view images are owned and copyrighted by Google Inc. The CAD models used to make the dataset are owned by AccuCities Inc. The refinement of images' geographic information and rendering of CAD models are done by UC Berkeley. For any commercial or profitable usage of the datasets, one must seek explicit permissions from each of these rightful owners.

Test-Development

Start: May 23, 2020, midnight

Description: During this phase, you submit results on the test data. Feedback is provided on a portion of the testing data.

Test-Final

Start: July 24, 2020, midnight

Description: You do not need to do anything in this phase. The results on all the test images will be revealed when the organizers make them available.

Benchmark

Start: Aug. 1, 2020, midnight

Description: The challenge has ended, but you can continue to submit your results and test on the server.

Competition Ends

Never
