In this competition, MAFAT DDR&D (Directorate of Defense Research & Development) would like to tackle the challenge of automatically exploiting fine-grained information from aerial imagery data. As the volume of imagery gathered by aerial sensors is rapidly growing, we understand that the exploitation of such data cannot be achieved solely through a manual image interpretation process. The competition’s objective is to explore automated solutions that will enable fine-grained classification of objects in high-resolution aerial imagery.
Participants’ goal is to classify different objects found in aerial imagery data. The classification includes a coarse-grained classification into main classes (for example, Large Vehicle) and a fine-grained classification of subclasses and unique features (for example, Sunroof).
1st Prize: $15,000
2nd Prize: $10,000
3rd Prize: $5,000
The dataset consists of aerial imagery taken from diverse geographical locations, at different times, resolutions, area coverages, and photo conditions (weather, angles, and lighting). Image resolution varies between 5 cm and 15 cm GSD (Ground Sample Distance).
A few examples are presented below:
As can be seen, images include many different types of objects, such as: vehicles, roads, buildings, trees, etc.
Participants are asked to classify objects at four granularity levels:
Here is a full description of the general classes' tagging information:
| Subclass | Features | Color |
| --- | --- | --- |
| Minivan | Open Cargo Area | Blue |
| Prime Mover | Enclosed Box | Red |
| Crane Truck | Soft Shell Box | Black |
| Concrete Mixer Truck | Ladder | Silver/Grey |
| | Open Cargo Area | White |
| Minibus | Harnessed to a Cart | Green |
Table-1: Tags CSV file of the training-set
Table-2: CSV file of the test-set
Participants are asked to accurately classify all tagged objects that appear in the provided set, according to the four classification categories (Class, Subclass, Features, and Color).
The submission file should include all the tagged objects that appear in the test-set, in each category. For each category, the list of tagged objects should be sorted by the objects' confidence level (high to low). Objects that do not belong to the category should be assigned a negligible probability, and the ranking among them is insignificant.
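The per-category ranking described above can be sketched as follows. This is an illustrative assumption only: the object IDs, confidence scores, and column name are ours, not the official submission schema.

```python
# Hypothetical sketch: rank one category's tagged objects by model
# confidence, highest first, and write them as a CSV column.
import csv
import io

# Assumed confidences that each tagged object belongs to one category
# (e.g. Sedan); values are made up for illustration.
predictions = {101: 0.92, 102: 0.10, 103: 0.77}

# Sort object IDs by confidence, high to low.
ranked = sorted(predictions, key=predictions.get, reverse=True)

# Write one category's column of a submission-style CSV.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["object_id"])  # illustrative header, not the official one
for obj_id in ranked:
    writer.writerow([obj_id])

print(buf.getvalue())
```

Objects the model believes are outside the category (here, object 102) naturally fall to the bottom of the list, matching the requirement that their internal ordering is insignificant.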
This format is presented in Table-3.
Table-3: Submission file format
This competition has two phases: public and private.
In the public phase, the submission limit is five per day, and submissions will be published on the competition leaderboard.
In the private phase, a total of three submissions is allowed. The scores of these submissions will not be published on the leaderboard, nor will they be available to the user or group who submitted them. Please note that this phase is only three days long.
The dataset is similar for both phases.
The contest forum is hosted on Google Groups. Please send us a preferred email address for your participation in the forum (to email@example.com), and we will add you. Because we prefer not to flood our participants with emails, we chose not to notify you of new forum threads, so checking the forum for new threads, Q&A, etc. is your responsibility.
For each category, an Average Precision score will be calculated separately. Then, a Quality Index will be calculated as the average of all Average Precision scores (Mean Average Precision).
The score will be calculated for each category separately according to the following formula:

AP = (1/K) · Σ_{k=1..n} Precision(k) · rel(k)

where:
- K is the total number of objects of the category in the test data (ground truth),
- n is the number of submitted objects,
- Precision(k) is the precision calculated over the first k objects, and
- rel(k) equals 1 if object k truly belongs to the category and 0 otherwise.
After calculating the per-category Average Precision, the total score will be determined using Mean Average Precision (MAP). Every category in the fine-grained classification has the same weight in the total score; therefore a small-sample-size category (e.g. Minibus, 25 objects in the training dataset) carries the same weight as a large-sample-size category (e.g. Sedan, 5,783 objects in the training dataset). This index varies between 0 and 1 and rewards correct classifications weighted by confidence, so it distinguishes participants who classify all objects correctly, under all environmental conditions, and can also express their confidence in each classification.
The total score is:

MAP = (1/Nc) · Σ_{c=1..Nc} AP_c

where Nc is the number of categories.
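The metric above can be sketched in a few lines of Python, assuming the standard Average Precision formulation; the function names are ours and this is not the official evaluation script.

```python
# Sketch of the scoring metric: per-category Average Precision,
# then an unweighted mean over categories (MAP).

def average_precision(ranked_labels, num_positives):
    """AP over a confidence-ranked list.

    ranked_labels: 0/1 relevance flags (rel(k)), sorted by confidence
                   from high to low.
    num_positives: K, the total number of objects of this category in
                   the ground truth.
    """
    hits = 0
    precision_sum = 0.0
    for k, rel in enumerate(ranked_labels, start=1):
        if rel:
            hits += 1
            precision_sum += hits / k  # Precision(k) * rel(k)
    return precision_sum / num_positives if num_positives else 0.0

def mean_average_precision(ap_per_category):
    # Every category gets equal weight, regardless of its sample size.
    return sum(ap_per_category) / len(ap_per_category)
```

For example, a ranking of [1, 0, 1] with K = 2 scores (1/1 + 2/3) / 2 = 5/6, since only the ranks where rel(k) = 1 contribute.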
Start: Sept. 1, 2018, midnight
Description: Dear participant, the competition is almost good to go. It officially launches on September 1st, 2018; please do not try to submit anything before that date. We will start reviewing applications to join the competition on August 26. Once your application is approved, you will receive an email detailing the next steps for obtaining the dataset and starting to work on the challenge. Once approved, you'll have access to a dataset containing more than 1,600 high-resolution images, with more than 11,600 (fine-grained) classified objects! Thank you and good luck, The Competition Organizing Team
Start: Nov. 27, 2018, midnight
Dec. 1, 2018, midnight