In this competition, MAFAT DDR&D (Israel Directorate of Defense Research & Development) would like to tackle the challenge of automatically exploiting fine-grained information from aerial imagery data. As the volume of imagery gathered by aerial sensors is rapidly growing, we understand that the exploitation of such data cannot be achieved solely through a manual image analysis process. The competition's objective is to explore automated solutions that will enable fine-grained classification of objects in high-resolution aerial imagery.
Participants’ goal is to classify different objects found in aerial imagery data. The classification includes a coarse-grained classification for main classes (for example - large vehicle) and fine-grained classification of subclasses and unique features (for example - a car that has a sunroof).
1st Place: $15,000
2nd Place: $10,000
3rd Place: $5,000
The dataset consists of aerial imagery taken from diverse geographical locations, at different times, resolutions, area coverages, and image acquisition conditions (weather, sun direction, camera direction, etc.). Image resolution varies between 5 cm and 15 cm GSD (Ground Sample Distance).
A few examples are presented below:
Participants are asked to classify objects in four granularity levels:
Here is a full description of the competition dataset's tagging hierarchy:
| Subclass | Feature | Color |
|---|---|---|
| Minivan | Open cargo area | Blue |
| Prime mover | Enclosed box | Red |
| Crane truck | Soft shell box | Black |
| Concrete mixer truck | Ladder | Silver/Grey |
| | Open cargo area | White |
| Minibus | Harnessed to a cart | Green |
Table-1: Tags CSV file of the training-set
Table-2: CSV file of the test-set
Participants are asked to accurately classify all tagged objects in the provided test set, according to the four classification labels (Class, Subclass, Features, and Color).
Participants are required to submit a CSV file (according to the format shown in table-3). In this file, each column represents a label (Class / Subclass / Feature / Color) and should contain the IDs of all objects in the set, sorted by probability. Hence, the object ID at the top of the list (the first row) is the one with the highest probability of belonging to this label. Likewise, the object ID at the bottom of the list (the last row) is the one with the lowest probability of belonging to this label. Even if it is clear that a particular object does not belong to a particular label column, its ID must still appear in every label column. You are not required to share the probabilities with us; only the order of the objects in each label column matters.
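The per-label ranking described above can be sketched as follows. This is a minimal illustration, not the official tooling: the label names, object IDs, and probabilities below are made up, and the real file must contain one column per competition label with every object ID in each column.

```python
import csv

# Hypothetical per-label probabilities: probs[label][object_id] = P(object has label).
probs = {
    "sedan":   {101: 0.91, 102: 0.12, 103: 0.55},
    "minibus": {101: 0.05, 102: 0.80, 103: 0.30},
}

# For each label column, sort all object IDs from highest to lowest probability.
columns = {
    label: [oid for oid, _ in sorted(scores.items(), key=lambda kv: kv[1], reverse=True)]
    for label, scores in probs.items()
}

# Write the submission: one column per label, row i holds each label's i-th ranked ID.
with open("answer.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns.keys())
    writer.writerows(zip(*columns.values()))
```

Note that every object ID appears in every column; only the ordering differs between columns.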
Table-3: Submission file format
This competition has two phases: public and private.
In the private phase, participants are asked to select submissions for final judging. Each participant may select up to three submissions for judging. Participants should save the models that generated the judged submissions (winners will be asked to submit the generating models). These submissions will be used to determine the final results. These submissions' grades will not be published on the public leaderboard, nor will they be available to the user or group who submitted them. Please note that this phase is only three days long. The dataset is similar for both phases.
The competition forum is held on Google Groups (link). In order to avoid flooding inboxes with emails, we chose not to inform participants of every new forum activity, so make sure you check the forum for new threads, Q&A, etc.
The score will be calculated for each label separately according to the following formula:
K is the total number of objects in the test set (ground truth), and Precision(k) is the precision calculated over the first k objects.
After calculating the per-label Average Precision, the final score will be determined using Mean Average Precision (MAP). Every label in the fine-grained classification has the same weight in the final score. Therefore, the weight of a small-sample-size label (e.g. minibus, 25 objects in the training dataset) is equal to that of a large-sample-size label (e.g. sedan, 5783 objects in the training dataset). This index varies between 0 and 1 and rewards correct classifications weighted by the confidence assigned to each classification; that is, it distinguishes participants who not only classify all objects correctly, in all environmental conditions, but also rank them according to their confidence in each classification.
The final score is:

    Final score = MAP = (1 / Nc) · Σ AP(label)

where Nc is the number of labels.
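The scoring above can be sketched in code. This is an illustrative reconstruction assuming the standard Average Precision over a ranked list (precision accumulated at each correctly ranked object, averaged over the label's ground-truth objects), not the organizers' official scoring script.

```python
def average_precision(ranked_ids, true_ids):
    """AP for one label: ranked_ids is the submitted ordering of all object IDs,
    true_ids the set of objects that actually belong to the label."""
    true_ids = set(true_ids)
    hits, total = 0, 0.0
    for k, oid in enumerate(ranked_ids, start=1):
        if oid in true_ids:
            hits += 1
            total += hits / k  # Precision(k), accumulated at each correct object
    return total / len(true_ids) if true_ids else 0.0

def mean_average_precision(rankings, truth):
    """MAP: every label carries the same weight, regardless of its sample size."""
    labels = truth.keys()
    return sum(average_precision(rankings[l], truth[l]) for l in labels) / len(labels)
```

For example, ranking [1, 3, 2] against ground truth {1, 2} yields AP = (1/1 + 2/3) / 2 ≈ 0.833: the correct object ranked first contributes full precision, while the one ranked third contributes only 2/3.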
Entry in this competition constitutes your acceptance of these official competition rules.
The operation of the submitted models will be verified prior to the decision on the winning algorithm.
If a team wins a monetary prize, Competition Organizer will allocate the prize money in even shares between team members unless the team unanimously contacts the Competition Organizer within three business days following the submission deadline to request an alternative prize distribution.
Start: Sept. 1, 2018, midnight
Description: We're now at the public test phase. Approved participants can submit their results and enter the public leaderboard. If your status is still 'pending' and has yet to be 'approved', please check your email. You'll find an email sent from firstname.lastname@example.org with a link to a form you'll need to fill in so that we can review and approve your application. Once your application is approved, you will receive an email detailing the next steps for obtaining the data set and starting to work on the challenge.

When you wish to make a submission, please follow these steps:
a) Create a submission file based on the submission format (the format can be downloaded from the 'Learn the Details' --> 'File Submission' section). Make sure to name your file 'answer.csv'.
b) Zip your CSV file into a zip file named 'answer.zip'.
c) Click 'Submit' and upload the zipped file.
Start: Nov. 27, 2018, midnight
Dec. 1, 2018, midnight