DeepGlobe Building Extraction Challenge

Organized by dlindenbaum

First phase

March 1, 2018, 6 p.m. UTC


Competition Ends
May 15, 2018, 11:59 p.m. UTC


Modeling population dynamics is of great importance for disaster response and recovery, and detecting buildings and urban areas is key to achieving this. We pose the challenge of automatically detecting buildings from satellite images. The problem is formulated as a binary segmentation task: localize all building polygons in each image. Evaluation is based on the overlap of the detected polygons with the ground truth.

For details about other DeepGlobe challenges and the workshop:

Please refer to the following paper if you participate in this challenge or use the dataset for your approach:

@InProceedings{DeepGlobe18,
  author    = {Demir, Ilke and Koperski, Krzysztof and Lindenbaum, David and Pang, Guan and Huang, Jing and Basu, Saikat and Hughes, Forest and Tuia, Devis and Raskar, Ramesh},
  title     = {DeepGlobe 2018: A Challenge to Parse the Earth Through Satellite Images},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2018}
}



This is an object segmentation problem. Each input is a satellite image, and you must predict the set of polygons that describe the buildings in the image.

There are 2 phases:

  • Phase 1: Development phase. We provide you with labeled training data and unlabeled test data. Make predictions for both datasets. However, you will receive feedback on your performance on the test set only. The performance of your LAST submission will be displayed on the leaderboard.
  • Phase 2: Final phase. Please complete this phase using the Validation dataset. Your performance on the validation dataset will appear on the leaderboard once the organizers finish checking the submissions.

You only need to submit the prediction results (no code). However, you must submit a short paper of 3 pages (+1 page for references) before May 1st to be eligible for the final phase. We will evaluate your methodology and your results in parallel. Paper submission is open; please use the CVPR paper template.

The submissions are evaluated using the IoU (Intersection over Union) metric.

The Metric

In the SpaceNet Challenge, the metric for ranking entries is based on the Jaccard Index, also called Intersection over Union (IoU). For more information, read the full article on The DownlinQ and see the details below.

Evaluation Metric

The evaluation metric for this competition is an F1 score with the matching algorithm inspired by Algorithm 2 in the ILSVRC paper applied to the detection of building footprints. For each building there is a geospatially defined polygon label to represent the footprint of the building. A SpaceNet entry will generate polygons to represent proposed building footprints. Each proposed building footprint is either a “true positive” or a “false positive”.

  • The proposed footprint is a “true positive” if the proposal is the closest (measured by the IoU) proposal to a labeled polygon AND the IoU between the proposal and the label is above the prescribed threshold of 0.5.
  • Otherwise, the proposed footprint is a “false positive”.
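Read literally, the matching rule above can be sketched as a greedy assignment. This is a hedged illustration only: `match_footprints` is a hypothetical name, the `iou` callable is supplied by the caller, and the official scorer may break ties differently.

```python
def match_footprints(proposals, labels, iou, threshold=0.5):
    """Illustrative matching sketch: a proposal is a true positive if it is
    the closest (by IoU) proposal to a still-unmatched label AND that IoU
    is above the 0.5 threshold; everything else is a false positive.
    At most one true positive is allowed per labeled polygon."""
    matched = set()  # indices of labels already claimed by a true positive
    tp = 0
    for proposal in proposals:
        best_iou, best_j = 0.0, None
        for j, label in enumerate(labels):
            if j in matched:
                continue  # each label can be matched at most once
            score = iou(proposal, label)
            if score > best_iou:
                best_iou, best_j = score, j
        if best_j is not None and best_iou > threshold:
            matched.add(best_j)
            tp += 1
    return tp, len(proposals) - tp  # (true positives, false positives)
```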

There is at most one “true positive” per labeled polygon. The measure of proximity between labeled polygons and proposed polygons is the Jaccard similarity or the “Intersection over Union (IoU)”, defined as:

IoU(A, B) = area(A ∩ B) / area(A ∪ B)

The value of IoU is between 0 and 1, where closer polygons have higher IoU values.
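As a concrete, deliberately simplified illustration of the definition above, here is IoU for axis-aligned boxes. Real building footprints are general polygons and require a polygon-clipping library; `box_iou` below is only a stand-in, not the challenge's scorer.

```python
def box_iou(a, b):
    """IoU for axis-aligned boxes given as (xmin, ymin, xmax, ymax).
    Simplified stand-in for the polygon IoU used by the challenge."""
    # Overlap extents along each axis (zero if the boxes are disjoint).
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    # area(A ∪ B) = area(A) + area(B) − area(A ∩ B)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0
```

Identical boxes give 1.0 and disjoint boxes give 0.0, matching the stated range of the metric.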

The F1 score is the harmonic mean of precision and recall, combining the accuracy in the precision measure and the completeness in the recall measure. For this competition, the number of true positives and false positives are aggregated over all of the test imagery and the F1 score is computed from the aggregated counts.

For example, suppose there are N polygon labels for building footprints that are considered ground truth and suppose there are M proposed polygons by an entry in the SpaceNet competition. Let tp denote the number of true positives of the M proposed polygons. The F1 score is calculated as follows:

precision = tp / M,  recall = tp / N,  F1 = 2 × precision × recall / (precision + recall)

The F1 score is between 0 and 1, where larger numbers are better scores.
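The aggregated-count computation above can be sketched as follows (`f1_from_counts` is a hypothetical helper name; tp true positives out of M proposals are scored against N ground-truth labels, as in the example):

```python
def f1_from_counts(tp, n_labels, n_proposals):
    """F1 from aggregated counts: precision = tp / M proposals,
    recall = tp / N ground-truth labels. Illustrative sketch only."""
    if n_labels == 0 or n_proposals == 0:
        return 0.0
    precision = tp / n_proposals
    recall = tp / n_labels
    if precision + recall == 0.0:
        return 0.0  # no true positives at all
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```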


  • The images provided could contain anywhere from zero to multiple buildings.
  • All proposed polygons should be legitimate (they should have a positive area, and their points should form at least a triangle rather than a point or a line).
  • Use the metric implementation code to self-evaluate.
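A quick self-check for the validity rule in the list above can be written with the shoelace formula. This is an illustrative sketch under the stated rules; the official checker may apply stricter geometric tests, and `is_valid_polygon` is a hypothetical name.

```python
def is_valid_polygon(points, min_area=1e-9):
    """Sanity-check a proposed footprint: at least three distinct vertices
    and a non-zero enclosed area. `points` is a list of (x, y) tuples."""
    if len(set(points)) < 3:
        return False  # a point or a repeated-vertex degenerate shape
    # Shoelace formula over consecutive vertex pairs (wrapping around).
    area = 0.0
    for (x1, y1), (x2, y2) in zip(points, points[1:] + points[:1]):
        area += x1 * y2 - x2 * y1
    # Collinear vertices (a line) yield zero area and are rejected.
    return abs(area) / 2.0 > min_area
```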

Example Implementations

To jump-start the thought process and the implementation, you can take a look at some open source solutions below:


This challenge is governed by the DeepGlobe Rules, and all data for the DeepGlobe Building Extraction Challenge is licensed under CC BY-SA 4.0.


Start: March 1, 2018, 6 p.m.

Description: Directly submit results on the validation and/or test data; feedback is provided on the validation set only. Do not forget to submit the short paper about your methodology before May 1st!


Start: May 2, 2018, 11 p.m.

Description: Please complete this phase using the Validation dataset. The results on the test set will be revealed when the organizers make them available.

Competition Ends

May 15, 2018, 11:59 p.m.
