MAFAT Challenge - Fine-Grained Classification from Aerial Imagery

Organized by yuvalsh
Reward $30,000

First phase: Public Test Phase - Sept. 1, 2018, midnight UTC

End: Competition Ends - Dec. 1, 2018, midnight UTC

Welcome to MAFAT CHALLENGE - Fine-Grained Classification from Aerial Imagery

In this competition, MAFAT DDR&D (Directorate of Defense Research & Development) would like to tackle the challenge of automatically exploiting fine-grained information from aerial imagery data. As the volume of imagery gathered by aerial sensors is rapidly growing, exploiting such data cannot be achieved solely by a manual image interpretation process. The competition’s objective is to explore automated solutions that enable fine-grained classification of objects in high-resolution aerial imagery.

Participants’ goal is to classify different objects found in aerial imagery data. The classification includes a coarse-grained classification into main classes (for example, Large vehicle) and a fine-grained classification of subclasses and unique features (for example, Sunroof).

Prizes

1st Prize: $15,000
2nd Prize: $10,000
3rd Prize: $5,000

Data

The dataset consists of aerial imagery taken from diverse geographical locations, at different times, resolutions, area coverage, and photo conditions (weather, angles, and lighting). Image resolution varies between 5 cm and 15 cm GSD (Ground Sample Distance).

A few examples are presented below:

As can be seen, images include many different types of objects, such as: vehicles, roads, buildings, trees, etc.

Task Specifications

Participants are asked to classify objects at four granularity levels:

  1. Class - In this category, objects are divided according to their size. The category contains ‘Large vehicles’ and ‘Small vehicles’. Each classification in this category includes a single value.
  2. Subclass - In this category, objects are divided according to their function or designation, for example: Cement mixer, Crane truck, Prime mover, etc. Each classification in this category includes a single value.
  3. Presence of Features - This category deals with the identification of each object’s unique characteristics, for example: has a ladder? is wrecked? has a sunroof? etc. Each classification in this category may include multiple values.
  4. Object perceived color - This category deals with the identification of the object’s color, for example: Blue, Red, Yellow, etc. Each classification in this category includes a single value.

Here is a full description of the tagging information for the general classes (an illustrative example follows the list):

  1. Small vehicle
    1. Subclasses - Sedan, Hatchback, Minivan, Van, Pickup truck, Jeep, Public vehicle.
    2. Features - Sunroof, Luggage Carrier, Open cargo area, Enclosed cab, Wrecked, Spare wheel.
    3. Colors - Yellow, Red, Blue, Black, Silver/Grey, White, Other.
  2. Large vehicle
    1. Subclasses - Truck, Light Truck, Cement mixer, Dedicated agricultural vehicle, Crane truck, Prime mover, Tanker, Bus, Minibus.
    2. Features - Open cargo area, AC Vents, Wrecked, Enclosed box, Enclosed cab, Ladder, Flatbed, Soft shell box, Harnessed to a cart.
    3. Colors - Yellow, Red, Blue, Black, Silver/Grey, White, Other.
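
To make the four levels concrete, here is a minimal sketch of one tagged object's labels in Python. The field names and values are illustrative only; the actual schema is the tags CSV described under "Training Set" below (Table-1).

```python
# One illustrative tagged object across the four granularity levels.
# Field names are hypothetical, not the dataset's actual CSV columns.
example_tag = {
    "class": "Small vehicle",   # single value
    "subclass": "Sedan",        # single value
    "features": ["Sunroof"],    # may include multiple values
    "color": "Yellow",          # single value
}
```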

Small Vehicles

Designation     Features           Color
Sedan           Sunroof            Yellow
Hatchback       Luggage carrier    Red
Minivan         Open cargo area    Blue
Van             Enclosed cab       Black
Pickup truck    Wrecked            Silver/Grey
Jeep            Spare wheel        White
                                   Green
                                   Other

Large Vehicles

Designation                       Features               Color
Truck                             AC Vents               Yellow
Prime mover                       Enclosed box           Red
Light truck                       Wrecked                Blue
Crane truck                       Soft shell box         Black
Cement mixer                      Ladder                 Silver/Grey
Dedicated agricultural vehicle    Open cargo area        White
Minibus                           Harnessed to a cart    Green
Bus                               Flatbed                Other
Tanker

Training Set:

Participants will receive a training set, consisting of:

  • 1,697 tiff and jpeg images.
  • CSV file of tagged objects - Each object is represented by a tag ID, the ID of the image in which it is located, and a bounding polygon, which is a set of four x-y (pixel) coordinates. Additionally, each object in the training set includes fine-grained classification data: class, subclass, features, and color. Please note that features are represented as boolean fields, where “-1” represents a non-viable option.
Table-1: Tags CSV file of the training set
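
As a loading sketch, the tags file can be read with pandas. The column names below (tag_id, image_id, the eight polygon coordinates, and the feature flags) are illustrative assumptions; the authoritative schema is the one shown in Table-1.

```python
import pandas as pd

# Load the training tags CSV (illustrative column names; see Table-1).
tags = pd.read_csv("train_tags.csv")

# Each bounding polygon is four (x, y) pixel coordinates. Assuming they
# are stored as columns p1_x, p1_y, ..., p4_x, p4_y:
poly_cols = [f"p{i}_{ax}" for i in range(1, 5) for ax in ("x", "y")]
polygons = tags[poly_cols].to_numpy().reshape(-1, 4, 2)

# Feature columns are boolean flags, with -1 marking a non-viable option
# (e.g. 'harnessed to a cart' for a sedan); mask those out before use.
viable = tags["sunroof"] != -1             # 'sunroof' is a hypothetical column
print(tags.loc[viable, "sunroof"].mean())  # fraction of viable tags with a sunroof
```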

Test Set:

Participants will also receive a test set, consisting of:

  • 1,421 tiff and jpeg images.
  • CSV file of non-tagged objects - This file includes objects in the same form as the training set (tag ID, image ID, and a bounding polygon), but without the classification data.
Table-2: CSV file of the test set

File Submission:

Participants are asked to accurately classify all tagged objects that appear in the test set, according to the four classification categories (Class, Subclass, Features, and Color).

The submission file should include, for each category, all the tagged objects that appear in the test set. Within each category, the list of tagged objects should be sorted by the objects' confidence level (high to low). Objects that do not belong to the category should have a negligible probability, and the ranking among them is insignificant.

This format is presented in Table-3.

Table-3: Submission file format
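
As a sketch of producing such a file, assuming a long format with one row per (object, category) pair and a confidence column (the authoritative layout is the one shown in Table-3):

```python
import pandas as pd

# Hypothetical classifier scores: one confidence per (object, category) pair.
rows = [
    {"tag_id": 101, "category": "Sedan",   "confidence": 0.97},
    {"tag_id": 102, "category": "Sedan",   "confidence": 0.03},
    {"tag_id": 102, "category": "Sunroof", "confidence": 0.88},
]
submission = pd.DataFrame(rows)

# Within each category, sort objects by confidence, high to low; objects
# that do not belong to a category keep a negligible probability.
submission = submission.sort_values(
    ["category", "confidence"], ascending=[True, False]
)
submission.to_csv("submission.csv", index=False)
```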

Competition Phases:

This competition has two phases: public and private.

In the public phase, the submission limit is five per day, and submission scores will be published on the competition leaderboard.

In the private phase, a total of three submissions is allowed, and these submissions' scores will not be published on the leaderboard, nor will they be available to the user or group that submitted them. Please note that this phase is only three days long.

The dataset is similar for both phases.

Contest Forum:

The contest forum is hosted on Google Groups. Please send us a preferred email address for your participation in the forum (to mafatchallenge@webiks.com), and we will add you. Because we prefer not to flood our participants with emails, we do not send notifications about new forum threads; checking the forum for new threads, Q&A, etc. is your responsibility.

Possible classification challenges:

  • The ground resolution varies across images.
  • Due to human error, the training set may contain up to 5% tag errors.
  • The training set does not necessarily represent the test set: the number of objects in each of the various classes is unbalanced between the sets.
  • Some subclasses and features contain only a small number of objects (dozens) in the training and test sets, while others contain hundreds or more tagged objects.

Contact information:

mafatchallenge@webiks.com


Evaluation

For each category, an Average Precision score is calculated separately. Then, a Quality Index is calculated as the average of all the Average Precision scores (Mean Average Precision).

Average Precision (per category)

The score is calculated for each category separately according to the following formula:

$$\mathrm{AP} = \frac{1}{K}\sum_{k=1}^{n}\mathrm{Precision}(k)\cdot\mathrm{rel}(k)$$

where K is the total number of objects from the category in the test data (ground truth), n is the number of objects submitted for the category, Precision(k) is the precision calculated over the first k objects, and rel(k) equals 1 if object k truly belongs to the category and 0 otherwise.

Quality Index (Mean Average Precision)

After the per-category Average Precision is calculated, the total score is determined using Mean Average Precision (MAP). Every category in the fine-grained classification has the same weight in the total score; therefore, the weight of a small-sample category (e.g. Minibus, 25 objects in the training dataset) is equal to that of a large-sample category (e.g. Sedan, 5,783 objects in the training dataset). This index varies between 0 and 1 and rewards correct classification together with well-ranked confidence: it distinguishes participants who classify all objects correctly, under all environmental conditions, and who can also express how confident they are in each classification.

The total score is:

$$\mathrm{MAP} = \frac{1}{N_c}\sum_{c=1}^{N_c}\mathrm{AP}_c$$

where N_c is the number of categories.
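
For intuition, here is a minimal sketch of this metric in Python. This is not the organizers' official scoring code; it simply follows the definitions above.

```python
import numpy as np

def average_precision(confidences, is_true, k_total):
    """AP for one category.

    confidences : score for every object submitted in this category.
    is_true     : 1 if the object truly belongs to the category, else 0.
    k_total     : K, the number of ground-truth objects in the category.
    """
    order = np.argsort(-np.asarray(confidences, dtype=float))  # rank high to low
    rel = np.asarray(is_true)[order]
    precision_at_k = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float(np.sum(precision_at_k * rel) / k_total)

def mean_average_precision(per_category_aps):
    # Every category gets equal weight, regardless of its sample size.
    return float(np.mean(per_category_aps))

# Example: 3 ground-truth objects; hits ranked 1st, 3rd and 4th.
ap = average_precision([0.9, 0.8, 0.7, 0.6, 0.5], [1, 0, 1, 1, 0], 3)
print(round(ap, 3))  # 0.806
```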

Terms and Conditions

COMPETITION RULES

  1. Competition Title: MAFAT Challenge - Fine-Grained Classification from Aerial Imagery.
  2. This Competition is organized by the Israeli Ministry of Defense (“Competition Organizer”). Webiks shall assist the Competition Organizer with the execution of this competition, including paying the competition winners.
  3. This competition is public, but the Competition Organizer approves each user’s request to participate and may decline participation at its own discretion.
  4. Submission Format: CSV containing participant’s predictions.
  5. Users: Each participant must open a CodaLab account to register. Only one account per user is allowed.
  6. If you are entering as a representative of a company, educational institution or other legal entity, or on behalf of your employer, these rules are binding on you individually and/or on the entity you represent or of which you are an employee. If you are acting within the scope of your employment, as an employee, contractor, or agent of another party, you warrant that such party has full knowledge of your actions and has consented thereto, including your potential receipt of a prize. You further warrant that your actions do not violate your employer’s or entity’s policies and procedures.
  7. Teams: Participants are allowed to form teams. There is no limitation on the number of participants in a team. You may not participate on more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted within 7 days of the competition deadline listed on the competition website. Participants who would like to form a team should consult the ‘Competition Teams’ section in CodaLab’s ‘user_teams’ Wiki page. In order to form a valid team, the total submission count of all the team’s participants must be less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running.
  8. Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted within 7 days of the competition deadline listed on the competition website. In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running. The organizers do not provide any assistance regarding team mergers.
  9. External data: You may use data other than the competition data to develop and test your models and Submissions. However, any such external data you use for this purpose must be available for use by all other competition participants. Thus, if you use external data, you must make it publicly available and declare it in the competition discussion forum on the CodaLab platform.
  10. Competition Duration: 3 months (September 1st to December 1st)
  11. Total Prize Amount (USD): $30,000
  12. Prize Allocation:
    1. 1st Prize: $15,000
    2. 2nd Prize: $10,000
    3. 3rd Prize: $5,000
  13. Upon Being Awarded a Prize:
    1. Prize winner must deliver to the Competition Organizer the final model’s software code as used to generate the winning submission and associated documentation written in English. The delivered software code must be capable of generating the winning submission and contain a description of resources required to build and/or run the executable code successfully.
    2. Prize winner must deliver the software code packaged in docker.
    3. Prize winner must agree to an interview, in which the winning solution will be discussed.
    4. Prize winner will grant to Competition Organizer a Non-exclusive license to the winning model’s software code and represent that you have the unrestricted right to grant that license.
    5. Prize winner will sign and return all Prize acceptance documents as may be required by Competition Organizer.
  14. If a team wins a monetary prize, competition sponsor will allocate the prize money in even shares between team members unless the team unanimously contacts the competition sponsor within three business days following the submission deadline to request an alternative prize distribution.

TERMS AND LEGAL CONSIDERATIONS:

  1. This competition is organized by the Israeli Ministry of Defense. Therefore, participation in this competition is subject to Israeli law.
  2. The Competition is open worldwide, unless you are a resident of Crimea, Cuba, Iran, Syria, North Korea, Sudan, Lebanon, or Iraq, or are subject to Israeli export controls or sanctions under Israeli law.
  3. The competition is public, but the Competition Organizer may decline participation at its own discretion.
  4. Competition Organizer reserves the right to disqualify any entrant from the Competition if, in Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the Competition by cheating, deception, or other unfair playing practices.
  5. Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraud, or late. Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a Submission that does not meet the requirements.
  6. Officers, directors, employees and advisory board members (and their immediate families and members of the same household) of the competition organizers (Israel Ministry of Defense, Webiks) and their respective affiliates are not eligible to receive any prize in the competition.
  7. You agree to use reasonable and suitable measures to prevent persons who have not formally agreed to these rules from gaining access to the Competition Data. You agree not to transmit, duplicate, publish, redistribute or otherwise provide or make available the Data to any party not participating in the Competition. You agree to notify Competition Organizer immediately upon learning of any possible unauthorized transmission or unauthorized access of the Data and agree to work with Competition Organizer to rectify any unauthorized transmission. You agree that participation in the Competition shall not be construed as having or being granted a license (expressly, by implication, estoppel, or otherwise) under, or any right of ownership in, any of the Data.
  8. By downloading the data for this competition you agree to the following terms:
    1. You will not distribute the data.
    2. You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the data.
  9. By joining the competition, you warrant and acknowledge that you may not infringe any copyrights, intellectual property, or patent of another party for the software you develop in the course of the competition.
  10. The Competition Organizer reserves the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the Competition concerning your identity, residency, mailing address, telephone number, email address, ownership of right, or information required for entering the Competition, you may be immediately disqualified from the Competition.
  11. If you wish to use and publicly publish external data (see the External data rule in this document), you may do so, provided that such public sharing does not violate the intellectual property rights of any third party. Adding and declaring external data is allowed no later than September 20, 2018. Adding external data later than this date, or using such data, is cause for disqualification from the competition.
  12. Participants grant to Competition Organizer the right to use your winning Submissions and the source code used to generate the Submission, for any purpose whatsoever, without further approval.
  13. Prizes are subject to Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules, and the compliance of the winning submissions with the submissions requirements.
  14. Prize winnings will be transferred to the winner by a third party.
  15. Participants that receive funding from the Competition Organizer for their participation in the competition may win first, second or third place and appear on the leaderboard, but will not be able to win the competition prize (in which case, the prize will be awarded to the next best competitor(s)).
  16. This competition does not constitute an obligation on behalf of the Israeli Ministry Of Defense, to either purchase products or to continue working with any of the participants.
  17. RIGHT TO CANCEL, MODIFY OR DISQUALIFY. Competition Organizer reserves the right at its sole discretion to terminate, modify or suspend the Competition.
  18. For Israeli Ministry of Defense funded participants only: the knowledge and/or code presented by the participants of the competition is for the sole and exclusive use of the sponsor of the competition. The sponsor hereby commits not to transfer the knowledge and/or code to any third party for commercial use.

Public Test Phase

Start: Sept. 1, 2018, midnight

Description: Dear participant, the competition is almost good to go. It officially launches on September 1st, 2018; please do not try to submit anything before that date. We will start reviewing applications to join the competition on August 26. Once your application is approved, you will receive an email detailing the next steps to obtain the dataset and start working on the challenge. Once approved, you'll have access to a dataset containing more than 1,600 high-resolution images, with more than 11,600 (fine-grained) classified objects! Thank you and good luck, The Competition Organizing Team

Private Test Phase

Start: Nov. 27, 2018, midnight

Competition Ends

Dec. 1, 2018, midnight
