MAFAT Challenge - Fine-Grained Classification of Objects from Aerial Imagery

Organized by yuvalsh
Reward $30,000

First phase: Public Test Phase - Sept. 1, 2018, midnight UTC

End: Competition Ends - Dec. 1, 2018, midnight UTC

Welcome to MAFAT Challenge - Fine-Grained Classification of Objects from Aerial Imagery

In this competition, MAFAT DDR&D (Israel Directorate of Defense Research & Development) would like to tackle the challenge of automatically exploiting fine-grained information from aerial imagery data. As the volume of imagery gathered by aerial sensors is rapidly growing, we understand that the exploitation of such data cannot be achieved solely by a manual image analysis process. The competition’s objective is to explore automated solutions that will enable fine-grained classification of objects in high-resolution aerial imagery.

Participants’ goal is to classify different objects found in aerial imagery data. The classification includes a coarse-grained classification for main classes (for example - large vehicle) and fine-grained classification of subclasses and unique features (for example - a car that has a sunroof).

Prizes

1st Place: $15,000
2nd Place: $10,000
3rd Place: $5,000

Data

The dataset consists of aerial imagery taken from diverse geographical locations, at different times, resolutions, area coverage and image acquisition conditions (weather, sun direction, camera direction, etc.). Image resolution varies between 5 cm and 15 cm GSD (Ground Sample Distance).

A few examples are presented below:

[Example images]

Task Specifications

Participants are asked to classify objects in four granularity levels:

  1. Class - every object is categorized into one of the following major classes: 'Large Vehicles' or 'Small Vehicles'.
  2. Subclass - objects are categorized into subclasses according to their function or designation, for example: Cement mixer, Crane truck, Prime mover, etc. Each object should be assigned to a single subclass.
  3. Presence of features - objects are labeled according to their characteristics, for example: has a Ladder? is Wrecked? has a Sunroof? etc. Each object may be labeled with multiple different features.
  4. Object perceived color - objects are labeled with their (human) perceived color, for example: Blue, Red, Yellow, etc. Each object includes a single color value.

Here is a full description of the competition dataset's tagging hierarchy:

  1. Small vehicle
    1. Subclasses - Sedan, Hatchback, Minivan, Van, Pickup truck, Jeep, Public vehicle.
    2. Features - Sunroof, Luggage carrier, Open cargo area, Enclosed cab, Wrecked, Spare wheel.
    3. Colors - Yellow, Red, Blue, Black, Silver/Grey, White, Other.
  2. Large vehicle
    1. Subclasses - Truck, Light truck, Cement mixer, Dedicated agricultural vehicle, Crane truck, Prime mover, Tanker, Bus, Minibus.
    2. Features - Open cargo area, AC vents, Wrecked, Enclosed box, Enclosed cab, Ladder, Flatbed, Soft shell box, Harnessed to a cart.
    3. Colors - Yellow, Red, Blue, Black, Silver/Grey, White, Other.
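
For convenience, the hierarchy above can be transcribed directly into code. A minimal sketch; the dictionary layout and the constant name are our own, not part of the competition materials:

```python
# Tagging hierarchy of the competition dataset, transcribed from the
# description above. The dict structure itself is illustrative.
TAGGING_HIERARCHY = {
    "Small vehicle": {
        "subclasses": ["Sedan", "Hatchback", "Minivan", "Van",
                       "Pickup truck", "Jeep", "Public vehicle"],
        "features": ["Sunroof", "Luggage carrier", "Open cargo area",
                     "Enclosed cab", "Wrecked", "Spare wheel"],
        "colors": ["Yellow", "Red", "Blue", "Black",
                   "Silver/Grey", "White", "Other"],
    },
    "Large vehicle": {
        "subclasses": ["Truck", "Light truck", "Cement mixer",
                       "Dedicated agricultural vehicle", "Crane truck",
                       "Prime mover", "Tanker", "Bus", "Minibus"],
        "features": ["Open cargo area", "AC vents", "Wrecked",
                     "Enclosed box", "Enclosed cab", "Ladder",
                     "Flatbed", "Soft shell box", "Harnessed to a cart"],
        "colors": ["Yellow", "Red", "Blue", "Black",
                   "Silver/Grey", "White", "Other"],
    },
}
```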

Small Vehicles

(In this table and the next, each column is an independent list of example categories.)

Subclass        Features            Color
Sedan           Sunroof             Yellow
Hatchback       Luggage carrier     Red
Minivan         Open cargo area     Blue
Van             Enclosed cab        Black
Pickup truck    Wrecked             Silver/Grey
Jeep            Spare wheel         White
                                    Green
                                    Other
 

Large Vehicles

Designation                       Features               Color
Truck                             AC vents               Yellow
Prime mover                       Enclosed box           Red
Light truck                       Wrecked                Blue
Crane truck                       Soft shell box         Black
Cement mixer                      Ladder                 Silver/Grey
Dedicated agricultural vehicle    Open cargo area        White
Minibus                           Harnessed to a cart    Green
Bus                               Flatbed                Other
Tanker

Training Set

Participants will receive a training set, consisting of

  • training imagery - a folder containing 1,663 tiff and jpeg images.
  • train.csv - a CSV file containing 11,617 tagged vehicles. Each object is represented by a tag ID, the ID of the image in which it is located, and a bounding polygon, which is a set of four x-y (pixel) coordinates. Additionally, each object in the training set includes fine-grained classification data: class, subclass, features and color. Please note that features are represented as boolean fields, while “-1” represents a non-applicable option.

Table-1: Tags CSV file of the training-set
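
As a quick orientation, here is a minimal sketch for loading the tags file with pandas; the feature column name used below is hypothetical, since the actual headers are defined by the CSV itself:

```python
import pandas as pd

# Load the training tags described above (11,617 tagged vehicles).
train = pd.read_csv("train.csv")

# Features are boolean fields in which -1 marks a non-applicable option,
# so filter those rows out before using a feature as a training target.
# "sunroof" is a hypothetical column name for illustration.
has_sunroof = train[train["sunroof"] != -1]
```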

Test Set

Participants will also receive a test set, consisting of

  • test imagery - a folder containing 2,553 tiff and jpeg images.
  • test.csv - A CSV file containing 11,879 tagged vehicles. This file includes objects in the same form as in the training set (tag ID, image ID and a bounding polygon), but without the classification data.

Table-2: CSV file of the test-set

File Submission

Participants are asked to accurately classify all tagged objects in the provided test set, according to the four classification labels (Class, Subclass, Features, and Color).

Participants are required to submit a CSV file (in the format shown in Table-3). In this file, each column represents a label (Class / Subclass / Feature / Color) and should contain the IDs of all objects in the set, sorted by probability. Hence, the object ID at the top of the list (the first row) is the one with the highest probability of belonging to this label, and the object ID at the bottom of the list (the last row) is the one with the lowest probability of belonging to this label. Even if it is clear that a particular object does not belong to a particular label column, its ID must still appear in every label column. You are not required to share the probabilities; only the order of the objects in each label column matters.


Table-3: Submission file format

For example, let us imagine a dataset containing only three objects, and a participant who predicts that these objects are:
ID1 - A small vehicle: a yellow sedan with a sunroof.
ID2 - A small vehicle: a black van, or maybe a minivan, but most likely a van.
ID3 - A large vehicle: a black truck with AC vents and a ladder, which might be wrecked, but probably not.
To express these probabilities, the submission CSV file should look like this:
Table-4: Submission example
Note that only a few of the label columns are relevant (those colored blue), while the rest of the columns (not colored) are completely irrelevant to these three objects. Hence, in those columns, the order among these three specific objects does not matter.
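
Building on this example, here is a minimal sketch of how such a file could be produced: given a per-object probability for every label, each label column is simply all object IDs sorted by that probability in descending order. The label names and probability values below are illustrative:

```python
import pandas as pd

# Illustrative per-object probabilities: one row per object (tag) ID,
# one column per label. The values and label names are made up.
probs = pd.DataFrame(
    {"sedan": [0.90, 0.05, 0.00],
     "van": [0.02, 0.60, 0.01],
     "minivan": [0.03, 0.35, 0.01]},
    index=["ID1", "ID2", "ID3"],
)

# Every label column must contain ALL object IDs, sorted by descending
# probability of belonging to that label - even clear non-members.
submission = pd.DataFrame(
    {label: probs[label].sort_values(ascending=False).index
     for label in probs.columns}
)
submission.to_csv("answer.csv", index=False)
```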

 

Competition Phases

This competition has two phases: public and private.

In the public phase, participants are limited to five submissions per day, and submission results are published on the competition leaderboard.

In the private phase, participants are asked to select submissions for final judging. Each participant may select up to three submissions. Participants should save the models that generated the selected submissions (winners will be asked to submit the generating models). These submissions will be used to determine the final results. Their scores will not be published on the public leaderboard, nor will they be available to the user or group who submitted them. Please note that this phase is only three days long. The dataset is similar in both phases.

Competition Forum

The competition forum is hosted on Google Groups (link). To avoid flooding your inbox, we chose not to notify participants of every new forum activity, so make sure you check the forum regularly for new threads, Q&A, etc.

Possible Classification Challenges

  • The ground resolution varies across images.
  • Due to human errors, the training set may contain up to 5% tag errors.
  • The training set is not necessarily representative of the test set: the number of objects in each of the various classes is unbalanced between the two sets.
  • Some subclasses and features contain only a small number of objects (dozens) in the training and test sets, while others contain hundreds or more tagged objects.

Contact Information

team@mafatchallenge.com


Evaluation

For each label (each class, each subclass, each feature and each perceived color), an average precision score will be calculated separately. Then, a Quality Index will be calculated as the average of all the per-label average precision scores (Mean Average Precision).

Average Precision (per label)

The score will be calculated for each label separately according to the following formula:

$$\mathrm{AP} = \frac{\sum_{k=1}^{K} \mathrm{Precision}(k)\cdot \mathrm{rel}(k)}{\sum_{k=1}^{K} \mathrm{rel}(k)}$$

where:

K is the total number of objects in the test set (ground truth),

Precision(k) is the precision calculated over the first k objects,

and rel(k) equals 1 if the assignment of the label to object k is true and 0 if the assignment is false.
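
For intuition, here is a minimal sketch of this per-label score in Python, assuming a full ranking of object IDs and the set of ground-truth positives for the label; the organizers' exact implementation may differ:

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for a single label.

    ranked_ids   -- all object IDs, ordered by descending confidence
    relevant_ids -- set of IDs whose ground-truth assignment is True
    """
    hits, total = 0, 0.0
    for k, obj_id in enumerate(ranked_ids, start=1):
        if obj_id in relevant_ids:      # rel(k) == 1
            hits += 1
            total += hits / k           # Precision(k) * rel(k)
    return total / max(len(relevant_ids), 1)
```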

Quality Index (Mean Average Precision)

After calculating the per-label Average Precision, the final score will be determined using Mean Average Precision (MAP). Every label in the fine-grained classification has the same weight in the final score. Therefore, a label with a small sample size (e.g. minibus, 25 objects in the training dataset) carries the same weight as a label with a large sample size (e.g. sedan, 5,783 objects in the training dataset). This index ranges from 0 to 1 and rewards correct classification while also taking into account the confidence of each classification: it distinguishes participants who classify all objects correctly, in all environmental conditions, and who can also rank their classifications by confidence.

The final score is:

$$\mathrm{MAP} = \frac{1}{N_c}\sum_{c=1}^{N_c} \mathrm{AP}_c$$

where $N_c$ is the number of labels and $\mathrm{AP}_c$ is the average precision of label $c$.
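
And a corresponding sketch of the final score, reusing average_precision from the sketch above (the dict-based inputs are our own choice):

```python
def mean_average_precision(rankings, ground_truth):
    """rankings:     dict mapping label -> ranked list of object IDs
    ground_truth:    dict mapping label -> set of IDs carrying the label
    """
    scores = [average_precision(rankings[label], ground_truth[label])
              for label in rankings]
    return sum(scores) / len(scores)  # every label weighted equally
```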

Terms and Conditions

Competition Rules

Entry in this competition constitutes your acceptance of these official competition rules.

  1. Competition title: MAFAT Challenge - Fine-Grained Classification of Objects from Aerial Imagery.
  2. This Competition is organized by the Israeli Ministry Of Defense (“Competition Organizer”). Webiks shall assist the Competition Organizer with the execution of this competition, including paying the competition winners.
  3. This competition is public, but the Competition Organizer approves each user’s request to participate, and may decline to approve participation, according to its own considerations.
  4. Submission Format: CSV file containing participant’s predictions.
  5. Users: Each participant must create a CodaLab account to register. Only one account per user is allowed.
  6. If you are entering as a representative of a company, educational institution or other legal entity, or on behalf of your employer, these rules are binding on you, individually, and/or the entity you represent or are an employee of. If you are acting within the scope of your employment, as an employee, contractor, or agent of another party, you warrant that such party has full knowledge of your actions and has consented thereto, including your potential receipt of a prize. You further warrant that your actions do not violate your employer’s or entity’s policies and procedures.
  7. Teams: Participants are allowed to form teams. There are no limitations on the number of participants in a team. You may not participate on more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted within 7 days of the competition deadline listed on the competition website. Participants who would like to form a team should consult the ‘Competition Teams’ section in CodaLab’s ‘user_teams’ Wiki page. In order to form a valid team, the total submission count of all the team’s participants must be less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running.
  8. Team mergers are allowed and can be performed by the team leader. Team merger requests will not be permitted within 7 days of the competition deadline listed on the competition website. In order to merge, the combined team must have a total submission count less than or equal to the maximum allowed as of the merge date. The maximum allowed is the number of submissions per day multiplied by the number of days the competition has been running. The organizers do not provide any assistance with team mergers.
  9. External data: You may use data other than the competition data to develop and test your models and submissions. However, any such external data you use for this purpose must be available for use by all other competition participants. Thus, if you use external data, you must make it publicly available and declare it in the competition discussion forum, no later than November 1st.
  10. Submissions may not use or incorporate information from hand labeling or human prediction of the training or test datasets for the competition's target labels. In other words, solutions involving human labeling of any of the columns in the submission CSV file will be disqualified.
  11. The delivered software code is expected to be capable of generating the winning submission and of operating automatically on new, unseen data without significant loss of performance.
  12. The operation of this code will be verified prior to the decision on the winning algorithm.
  13. Competition Duration: 3 months (September 1st to December 1st)
  14. Total Prize Amount (USD): $30,000
  15. Prize Allocation:
    1. 1st Place: $15,000
    2. 2nd Place: $10,000
    3. 3rd Place: $5,000
  16. Upon being awarded a prize:
    1. Prize winner must deliver to the Competition Organizer the final model’s software code as used to generate the winning submission and associated documentation written in English. The delivered software code must be capable of generating the winning submission and contain a description of resources required to build and/or run the executable code successfully.
    2. Prize winner must also deliver the software code packaged in Docker.
    3. Prize winner must agree to an interview, in which the winning solution will be discussed.
    4. Prize winner will grant to Competition Organizer a non-exclusive license to the winning model’s software code and represent that you have the unrestricted right to grant that license.
    5. Prize winner will sign and return all prize acceptance documents as may be required by Competition Organizer.
  17. If a team wins a monetary prize, Competition Organizer will allocate the prize money in even shares between team members unless the team unanimously contacts the Competition Organizer within three business days following the submission deadline to request an alternative prize distribution.

Terms and Legal Considerations

  1. This competition is organized by the Israeli Ministry Of Defense. Therefore, participation in this competition is subject to Israeli law.
  2. The competition is open worldwide, unless you are a resident of Crimea, Cuba, Iran, Syria, North Korea, Sudan, Lebanon, or Iraq, or are subject to Israeli export controls or sanctions under Israeli law.
  3. The competition is public, but the Competition Organizer may decline to approve participation, according to its own considerations.
  4. Competition Organizer reserves the right to disqualify any entrant from the competition if, in Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the Competition by cheating, deception, or other unfair playing practices.
  5. Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraud, or late. Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not meet the requirements.
  6. Officers, directors, employees and advisory board members (and their immediate families and members of the same household) of the competition organizers (Israel Ministry of Defense, Webiks) and their respective affiliates are not eligible to receive any prize in the competition.
  7. You agree to use reasonable and suitable measures to prevent persons who have not formally agreed to these rules from gaining access to the competition data. You agree not to transmit, duplicate, publish, redistribute or otherwise provide or make available the data to any party not participating in the competition. You agree to notify Competition Organizer immediately upon learning of any possible unauthorized transmission or unauthorized access of the data and agree to work with Competition Organizer to rectify any unauthorized transmission. You agree that participation in the competition shall not be construed as having or being granted a license (expressly, by implication, estoppel, or otherwise) under, or any right of ownership in, any of the data.
  8. By downloading the data for this competition you agree to the following terms:
    1. You will not distribute the data.
    2. You accept full responsibility for your use of the data and shall defend and indemnify the Competition Organizer, against any and all claims arising from your use of the data.
  9. By joining the competition, you warrant and acknowledge that you will not infringe any copyrights, intellectual property rights, or patents of another party in the software you develop in the course of the competition.
  10. The Competition Organizer reserves the right to verify eligibility and to adjudicate on any dispute at any time. If you provide any false information relating to the competition concerning your identity, residency, mailing address, telephone number, email address, ownership of right, or information required for entering the competition, you may be immediately disqualified from the competition.
  11. If you wish to publish external data publicly, you may do so, provided that such public sharing does not violate the intellectual property rights of any third party. Adding and declaring external data is allowed no later than November 1st, 2018. Adding external data after this date, or using such data, is cause for disqualification from the competition.
  12. You grant the Competition Organizer the right to use your winning submissions, and the source code used to generate them, for any purpose whatsoever, without further approval.
  13. Prizes are subject to Competition Organizer’s review and verification of the entrant’s eligibility and compliance with these rules, and the compliance of the winning submissions with the submissions requirements.
  14. Prize winnings will be transferred to the winner by a third party.
  15. Participants who receive funding from the Competition Organizer for the purpose of participating in the competition may place first, second or third and appear on the leaderboard, but will not be able to win the competition prize (in which case, the prize will be awarded to the next best competitor).
  16. This competition does not constitute an obligation on behalf of the Israeli Ministry Of Defense, to either purchase products or to continue working with any of the participants.
  17. RIGHT TO CANCEL, MODIFY OR DISQUALIFY. Competition Organizer reserves the right at its sole discretion to terminate, modify or suspend the competition.
  18. For Israeli Ministry of Defense funded participants only: the knowledge and/or code presented by the participants of the competition is for the sole and exclusive use of the Competition Organizer. The Competition Organizer hereby commits not to transfer the knowledge and/or code to any third party for commercial use.

Public Test Phase

Start: Sept. 1, 2018, midnight

Description: We are now in the public test phase. Approved participants can submit their results and enter the public leaderboard. If your status is still 'pending' and yet to be 'approved', please check your email. You will find an email sent from team@mafatchallenge.com with a link to a form you need to fill in so that we can review and approve your application. Once your application is approved, you will receive an email detailing the next steps for obtaining the dataset and starting to work on the challenge.

When you wish to make a submission, please follow these steps:
  a) Create a submission file based on the submission format (the format can be downloaded from the 'Learn the Details' --> 'File Submission' section). Make sure to name your file 'answer.csv'.
  b) Zip your CSV file into a zip file named 'answer.zip'.
  c) Click 'submit' and upload the zipped file.
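
For steps (b) and (c), a minimal packaging sketch, assuming answer.csv has already been written to the working directory:

```python
import zipfile

# Package the submission as required: zip answer.csv into answer.zip.
with zipfile.ZipFile("answer.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("answer.csv")
```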

Private Test Phase

Start: Nov. 27, 2018, midnight

Competition Ends

Dec. 1, 2018, midnight
