Million-AID Multi-label Scene Classification

Organized by YangLong



Million-AID is a large-scale dataset for scene parsing in aerial images. It can be used to develop and evaluate aerial scene classification algorithms. Over the past few years, most efforts have been devoted to classifying an image into a single scene category, while in real-world scenarios a single image more often contains multiple scenes. This challenge aims to develop and test intelligent interpretation algorithms for multi-label aerial scene classification, which requires recognizing the multiple semantic categories that characterize an aerial image in Million-AID.

Detailed information about the Million-AID dataset employed in this challenge, including an FAQ, is available on the dataset's project pages.

Citation

If you make use of Million-AID, please cite the following papers:

@article{Long2021DiRS,
  title={On Creating Benchmark Dataset for Aerial Image Interpretation: Reviews, Guidances and Million-AID},
  author={Yang Long and Gui-Song Xia and Shengyang Li and Wen Yang and Michael Ying Yang and Xiao Xiang Zhu and Liangpei Zhang and Deren Li},
  journal={IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing},
  year={2021},
  volume={14},
  pages={4205-4230}
}

@misc{Long2022ASP,
  title={Aerial Scene Parsing: From Tile-level Scene Classification to Pixel-wise Semantic Labeling},
  author={Yang Long and Gui-Song Xia and Liangpei Zhang and Gong Cheng and Deren Li},
  year={2022},
  eprint={2201.01953},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}


Dataset Description

The multi-label scene classification task in this challenge requires participants to distinguish images with similar semantic content within the massive image collection and to assign each aerial image in Million-AID multiple scene labels.

There are over 1M scene instances with 73 semantic scene categories in Million-AID. The scene names and corresponding indices for this challenge include: (0) apron, (1) church, (2) transportation_land, (3) detached_house, (4) meadow, (5) substation, (6) parking_lot, (7) basketball_court, (8) mobile_home_park, (9) desert, (10) grassland, (11) religious_land, (12) island, (13) railway_area, (14) bare_land, (15) ground_track_field, (16) golf_course, (17) water_area, (18) power_station, (19) lake, (20) quarry, (21) railway, (22) mining_area, (23) bridge, (24) cemetery, (25) sports_land, (26) pier, (27) highway_area, (28) oil_field, (29) solar_power_plant, (30) commercial_area, (31) woodland, (32) intersection, (33) apartment, (34) stadium, (35) greenhouse, (36) public_service_land, (37) special_land, (38) train_station, (39) arable_land, (40) wastewater_plant, (41) baseball_field, (42) commercial_land, (43) storage_tank, (44) unutilized_land, (45) wind_turbine, (46) river, (47) sparse_shrub_land, (48) residential_land, (49) orchard, (50) dry_field, (51) dam, (52) port_area, (53) factory_area, (54) roundabout, (55) airport_area, (56) beach, (57) viaduct, (58) forest, (59) works, (60) road, (61) runway, (62) swimming_pool, (63) tennis_court, (64) helipad, (65) ice_land, (66) rock_land, (67) paddy_field, (68) mine, (69) leisure_land, (70) terraced_field, (71) industrial_land, (72) agriculture_land.

Dataset Download

OneDrive: Million-AID Download

The Million-AID images are collected from Google Earth. All images are sampled with RGB channels and stored in JPG format. Use of the Google Earth imagery must respect the "Google Earth" terms of use.


Data license

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Feel free to contact us if you have any questions or need clarification regarding the rules of the challenge or the licensing of the data.

Evaluation Criteria

Submissions are evaluated on how accurately the predicted scene labels match the ground-truth labels of the test images, using the protocol described below.

Submission Format

Participants need to submit a zip file containing classification results for all test images in Million-AID. The classification results are stored in a text file named "answer.txt", in which each line gives the name of a test image followed by the predicted category indices and their corresponding confidence values, separated by spaces. The results are organized in the following format:

image_name  category_index1,confidence1  category_index2,confidence2  category_index3,confidence3
image_name  category_index1,confidence1  category_index2,confidence2  category_index3,confidence3
image_name  category_index1,confidence1  category_index2,confidence2  category_index3,confidence3
...

A submission example for multi-label scene classification on Million-AID follows this format; the number of predicted labels may differ from image to image.
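As an illustration, the Python snippet below writes predictions to "answer.txt" in this format and packages the file into a zip archive for submission. The image names, prediction values, and the archive name are placeholders; only the line format follows the specification above.

import zipfile

# Hypothetical predictions: image name -> list of (category_index, confidence).
predictions = {
    "test_00001.jpg": [(17, 0.94), (19, 0.81), (46, 0.55)],
    "test_00002.jpg": [(39, 0.97), (50, 0.62)],
}

# Each line: image name, then "index,confidence" pairs separated by spaces.
with open("answer.txt", "w") as f:
    for image_name, labels in predictions.items():
        pairs = "  ".join(f"{index},{confidence:.4f}" for index, confidence in labels)
        f.write(f"{image_name}  {pairs}\n")

# The challenge expects a zip file containing the answer.txt file.
with zipfile.ZipFile("submission.zip", "w") as zf:
    zf.write("answer.txt")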

Evaluation Protocol

To comprehensively evaluate the performance of different algorithms, we adopt the average precision (AP) on each category and the mean average precision (mAP) over all categories for evaluation. The AP summarizes a precision-recall curve as the weighted mean of the precisions achieved at each threshold, with the increase in recall from the previous threshold used as the weight: AP = Σ_n (R_n - R_{n-1}) P_n, where P_n and R_n are the precision and recall at the n-th threshold. The mAP is then obtained by averaging the APs over all categories.
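As a minimal sketch of this computation, scikit-learn's average_precision_score implements exactly this summation; the random arrays below are placeholders for real ground-truth labels and predicted confidences, and this is not the official evaluation code.

import numpy as np
from sklearn.metrics import average_precision_score

# Placeholder data: binary ground truth and predicted confidences,
# shape (num_images, num_categories), with 73 categories as in Million-AID.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(1000, 73))
y_score = rng.random(size=(1000, 73))

# AP per category, then mAP as the unweighted mean over all categories.
ap = [average_precision_score(y_true[:, c], y_score[:, c]) for c in range(73)]
mAP = float(np.mean(ap))
print(f"mAP = {mAP:.4f}")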

We also report the precision, recall, and F1-measure for further comparison. Concretely, we adopt the overall precision, recall, and F1-measure (OP, OR, OF1) and the per-class precision, recall, and F1-measure (CP, CR, CF1), defined as follows:

OP = Σ_i N_ci / Σ_i N_pi ,   OR = Σ_i N_ci / Σ_i N_gi ,   OF1 = 2 * OP * OR / (OP + OR)

CP = (1/C) * Σ_i (N_ci / N_pi) ,   CR = (1/C) * Σ_i (N_ci / N_gi) ,   CF1 = 2 * CP * CR / (CP + CR)

where C is the number of labels, N_ci is the number of images correctly predicted for the i-th label, N_pi is the number of images predicted for the i-th label, and N_gi is the number of ground-truth images for the i-th label. We report OP, OR, OF1 and CP, CR, CF1 under the setting that a label is predicted as positive if its estimated probability is greater than 0.5. Among these metrics, mAP, OF1, and CF1 are the most important, as they provide the most comprehensive evaluation.
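A NumPy sketch of these definitions follows, assuming binary ground truth y_true and predicted confidences y_score shaped as in the previous sketch, with predictions thresholded at 0.5; the guard against division by zero for empty labels is an implementation choice here, not part of the official definition.

import numpy as np

def overall_and_per_class_metrics(y_true, y_score, threshold=0.5):
    # Binarize predictions: a label is positive if its probability exceeds the threshold.
    y_pred = (y_score > threshold).astype(int)

    n_c = ((y_pred == 1) & (y_true == 1)).sum(axis=0)  # N_ci: correct positives per label
    n_p = y_pred.sum(axis=0)                           # N_pi: predicted positives per label
    n_g = y_true.sum(axis=0)                           # N_gi: ground-truth positives per label

    # Overall metrics pool the counts over all labels before dividing.
    OP = n_c.sum() / n_p.sum()
    OR = n_c.sum() / n_g.sum()
    OF1 = 2 * OP * OR / (OP + OR)

    # Per-class metrics average the per-label ratios (guarding empty labels).
    CP = np.mean(n_c / np.maximum(n_p, 1))
    CR = np.mean(n_c / np.maximum(n_g, 1))
    CF1 = 2 * CP * CR / (CP + CR)
    return OP, OR, OF1, CP, CR, CF1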


Multi-label classification

Start: Oct. 10, 2021, midnight UTC

Description: Evaluation for multi-label aerial scene classification on Million-AID. Each aerial image should be assigned one or several semantic labels. Results are evaluated with a confidence threshold of 0.5.

Competition ends: Never

Leaderboard

# Username Score
1 YangLong 0.0373