2018 Open AI Tanzania Building Footprint Segmentation Challenge

Organized by jordan

First phase

Oct. 1, 2018, midnight UTC


Competition Ends
Nov. 10, 2018, midnight UTC

Welcome to the Open AI Tanzania Challenge!

Open AI Tanzania is a partnership with our friends at the State University of Zanzibar (SUZA), WeRobotics, World Bank, OpenAerialMap and Tanzania Flying Labs. Open AI Tanzania invites data scientists to develop feature detection algorithms that can automatically identify buildings and building types using high-resolution aerial imagery collected by Tanzanian drone pilots through the Zanzibar Mapping Initiative (ZMI). The goal of this challenge is to correctly segment and classify building footprints under various stages of construction.

Our core mission at WeRobotics is to localize opportunity, hence our Flying Labs. This explains why we’re also keen to invite local participation in this challenge. We’re therefore teaming up with our friends at Black in AI and DataKind to directly invite African data scientists across the country, continent and the world to this Tanzanian project. All our Open AI Challenges are of course open, which means everyone is invited to participate.

The winning machine learning classifiers will be used by our partners to inform a wide range of social good efforts across Zanzibar and the rest of Tanzania. This includes (and is not limited to) urban planning, public safety, public health, disaster response, environmental protection, sustainable development and census data. As such, we encourage participants who take up the Tanzania Challenge to consider making their classifiers open source.

Participants will be given a set of images to evaluate their algorithm on.  For every image your algorithm should segment and classify all building footprints found within the image into one of three classes (foundation, unfinished, completed).

The SpaceNet challenge provides good resources and scripts for getting started with geospatial data for this kind of problem.  We are very grateful that they took the time to create such valuable resources.

For every image there should be a .csv file with the same name containing the output of your algorithm.  The CSVs must be placed into a zip file to be submitted.  On Mac/Linux systems,

zip -jr submission.zip ./submission/

will work, where any name can be used in place of submission.
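The same packaging can also be done from Python with the standard-library zipfile module. This is a sketch under the assumption that your per-image CSVs live in ./submission/; the file name image_0001.csv and its contents are purely illustrative:

```python
import zipfile
from pathlib import Path

# Directory holding one CSV per evaluation image (name is illustrative).
csv_dir = Path("submission")
csv_dir.mkdir(exist_ok=True)

# An example per-image prediction file (contents abbreviated, for illustration).
(csv_dir / "image_0001.csv").write_text(
    '1,0.30,0.50,0.20,"POLYGON ((-5.7226 39.3043, -5.7227 39.3048, -5.7226 39.3043))",'
    '"POLYGON ((714 978, 892 1045, 714 978))"\n'
)

# Bundle every CSV into submission.zip.  arcname=csv_path.name stores each
# file without its directory prefix, matching zip's -j ("junk paths") flag.
with zipfile.ZipFile("submission.zip", "w") as zf:
    for csv_path in sorted(csv_dir.glob("*.csv")):
        zf.write(csv_path, arcname=csv_path.name)
```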


Each CSV should have one row per predicted building, with the columns building_id, conf_foundation, conf_unfinished, conf_completed, and the footprint polygon in geographic and pixel coordinates.  For example:

1,0.30,0.50,0.20,"POLYGON ((-5.7226 39.3043, -5.7227 39.3048, ... , -5.722 39.3043))","POLYGON ((714 978, 892 1045, ... , 714 978))"

"building_id" should be a unique identifier for each prediction made.

"conf_foundation", "conf_unfinished" and "conf_completed" should be your classifier's confidence scores for each building condition.  For leaderboard and evaluation purposes, a prediction is assigned to the class with the maximum score.

"coordinates" should define the predicted building footprint as a polygon in Well-known text (WKT) format, given both in (long, lat) geographic coordinates and in pixel coordinates (x, y) with (0, 0) located in the top left corner of the image.  The first and last vertex must be the same so the polygon is closed.
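As an illustration, a row in this format can be written with plain Python. The coordinates and file name below are made-up values, and a real submission would write one such row per predicted footprint:

```python
import csv

def wkt_polygon(ring):
    """Format a closed coordinate ring as a Well-known text POLYGON.
    The caller must make the first and last vertex identical."""
    coords = ", ".join(f"{x} {y}" for x, y in ring)
    return f"POLYGON (({coords}))"

# Illustrative footprint: a (long, lat) ring and the matching pixel ring,
# each closed by repeating the first vertex.
geo_ring = [(-5.7226, 39.3043), (-5.7227, 39.3048), (-5.7230, 39.3046), (-5.7226, 39.3043)]
pix_ring = [(714, 978), (892, 1045), (880, 1210), (714, 978)]

with open("image_0001.csv", "w", newline="") as f:
    writer = csv.writer(f)
    # building_id, conf_foundation, conf_unfinished, conf_completed, geo WKT, pixel WKT
    writer.writerow([1, 0.30, 0.50, 0.20, wkt_polygon(geo_ring), wkt_polygon(pix_ring)])
```

csv.writer quotes the WKT fields automatically because they contain commas, which produces the quoted-polygon layout shown in the example row above.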

Participants are ranked in the following way:

  • For each ground truth building footprint, we find the prediction of the same class with which it has the highest Jaccard index.  If that index is > 0.5, we count the pair as a True Positive (TP).
    • If multiple predictions overlap a single ground truth, we use the one with the highest Jaccard index and mark that prediction as counted, so it is not compared against any other ground truth segmentation.
    • If a prediction does not intersect any ground truth polygon, it is counted as a False Positive (FP).
    • If a ground truth does not overlap any prediction, we count it as a False Negative (FN).
  • The Precision, Recall and F score for each class are calculated across all images.
  • The F scores of the three classes are averaged together to give the final score for the performance of the classifier.
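The matching procedure above can be sketched in simplified form. Here footprints are represented as sets of pixel coordinates rather than polygons, so the Jaccard index reduces to set intersection-over-union; the official evaluation presumably operates on polygon geometry, and the function names are my own:

```python
def jaccard(a, b):
    """Jaccard index (intersection over union) of two pixel sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def score_class(ground_truths, predictions, threshold=0.5):
    """Greedy matching for one class: each ground truth claims the best
    remaining prediction; matches with IoU > threshold are True Positives."""
    used = set()
    tp = 0
    for gt in ground_truths:
        best_idx, best_iou = None, 0.0
        for i, pred in enumerate(predictions):
            if i in used:          # already matched to another ground truth
                continue
            iou = jaccard(gt, pred)
            if iou > best_iou:
                best_idx, best_iou = i, iou
        if best_idx is not None and best_iou > threshold:
            used.add(best_idx)
            tp += 1
    fp = len(predictions) - len(used)   # predictions matching no ground truth
    fn = len(ground_truths) - tp        # ground truths matching no prediction
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

The final leaderboard score would then be the mean of score_class over the foundation, unfinished, and completed classes.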


Final Test

Start: Nov. 1, 2018, midnight UTC
