FloodNet Challenge @ EARTHVISION 2021 - Track 1

Organized by binalab



Challenge Overview

Frequent and increasingly severe natural disasters threaten human health, infrastructure, and natural systems. Accurate, timely, and understandable information has the potential to revolutionize disaster management. For rapid, large-scale response and recovery after a natural disaster, access to visual data is critically important. The emergence of small unmanned aerial systems (UAS) with inexpensive sensors presents an opportunity not only to collect large-scale, high-resolution data after each natural disaster but also to collect data rapidly from hard-to-reach areas, where data collection can be unsafe for humans, if not impossible. However, analyzing and extracting meaningful information from such large datasets remains a significant challenge for the scientific community.

FloodNet provides high-resolution UAS imagery with detailed semantic annotation. To advance the damage assessment process for post-disaster scenarios, we present a unique challenge comprising classification, semantic segmentation, and visual question answering tasks built on the UAS imagery of the FloodNet dataset.

Track 1:

This track focuses on Image Classification and Semantic Segmentation for post-disaster damage assessment. 

  • Semi-Supervised Classification: The classification task requires labeling each image in the FloodNet dataset as 'Flooded' or 'Non-Flooded'. Labels are available for only a few of the training images; most training images are unlabeled.

  • Semi-Supervised Semantic Segmentation: The semantic segmentation labels include: 1) Background, 2) Building Flooded, 3) Building Non-Flooded, 4) Road Flooded, 5) Road Non-Flooded, 6) Water, 7) Tree, 8) Vehicle, 9) Pool, 10) Grass. Only a small portion of the training images have their corresponding masks available.

Evaluation Metric

For semantic segmentation, the evaluation metric is mean intersection over union (mIoU) over all pixel-level classes. For image classification, the metrics are accuracy and F1-score.
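To make the segmentation metric concrete, below is a minimal sketch of mIoU over pixel-level class masks. The class count and the convention of skipping classes absent from both masks are assumptions; the official scoring script may handle these details differently.

```python
import numpy as np

def mean_iou(pred, gt, num_classes=10):
    """Mean intersection over union between two integer class masks.

    pred, gt: arrays of the same shape holding per-pixel class indices.
    Classes absent from both masks are skipped (an assumption; the
    official scorer may use a different convention).
    """
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        gt_c = gt == c
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:
            continue  # class not present in either mask
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```

A perfect prediction yields an mIoU of 1.0; each misclassified pixel reduces both the intersection of its true class and inflates the union of its predicted class.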

Submission Criteria

  • For classification, participants must submit a JSON file titled "image_classes.json" mapping each image number (without the file extension) to its predicted class label. For example, if images 1111.jpg and 2222.jpg are predicted as 'Flooded' (label: 0) and 'Non-Flooded' (label: 1) respectively, the JSON file should contain: {"1111": 0, "2222": 1}
  • For semantic segmentation, participants must submit a folder containing the predicted masks named in the format "imageID.png". For example, if the test image is "1.jpg", the predicted mask should be "1.png".
  • Participants are expected to place all predicted segmentation masks and the classification JSON file in a single folder, compressed into a *.zip archive. An example of the submission folder is as follows:

  • submission.zip
    • 1.png
    • 2.png
    • 3.png
    • image_classes.json

  • When creating the zip archive, select all the mask images and the JSON file together and compress them directly, rather than compressing their parent folder. Compressing the parent folder creates nested child folders, which can break the scoring program.
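The packaging steps above can be sketched as follows. The prediction values and filenames are placeholders for illustration; the point is that every entry is written at the archive root, so no parent folder ends up inside submission.zip.

```python
import json
import zipfile

# Hypothetical predictions: image number -> label (0 = Flooded, 1 = Non-Flooded)
predictions = {"1111": 0, "2222": 1}

with open("image_classes.json", "w") as f:
    json.dump(predictions, f)

mask_files = ["1.png", "2.png", "3.png"]
for name in mask_files:          # empty placeholder masks for this sketch;
    with open(name, "wb") as f:  # in practice these are your predicted PNGs
        f.write(b"")

# Each file is added by its bare name, so it sits at the archive root
# rather than inside a nested folder.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("image_classes.json")
    for name in mask_files:
        zf.write(name)
```

Unzipping the result yields the masks and image_classes.json directly, matching the example submission layout above.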

At the end of Phase 2, you are expected to submit a short paper (4 pages) describing your method and results. Please use the CVPR template and submit it at https://cmt3.research.microsoft.com/EARTHVISION2021, selecting the track "Challenge - FloodNet".

Terms and Conditions

This challenge is governed by the following license, and all algorithms, models, solutions, and artifacts built or derived from our data cubes will also be open sourced under the same open source license.

Organizers

  • Maryam Rahnemoonfar, Computer Vision and Remote Sensing Laboratory (Bina Lab), University of Maryland Baltimore County (UMBC) (maryam@umbc.edu)
  • Masoud Yari, Bina Lab, UMBC (yari@umbc.edu)
  • Tashnim Chowdhury, Bina Lab, UMBC
  • Argho Sarkar, Bina Lab, UMBC
  • Debvrat Varshney, Bina Lab, UMBC
  • Robin Murphy, Texas A&M University
  • Catherine Bohn, Dewberry

Model Development Phase

Start: March 26, 2021, midnight UTC

Description: Please submit your results on the validation data. In this phase, you may fine-tune your model; feedback on the validation set is provided through the leaderboard.

Final Phase

Start: May 10, 2021, midnight UTC

Description: Please submit your results on the test data. The test data will be released when the final phase begins.

Competition Ends

May 15, 2021, midnight UTC
