Global Wheat Challenge 2020 - Localization and counting

Organized by etienne_david

Benchmark start: Oct. 27, 2020, midnight UTC

Competition ends: Never

Welcome!

The Global Wheat Challenge took place from 4th May to 4th August 2020 on the Kaggle platform: https://www.kaggle.com/c/global-wheat-detection. This CodaLab competition allows researchers from across the world to benchmark their approaches and share their results!

Description from Kaggle:

Open up your pantry and you’re likely to find several wheat products. Indeed, your morning toast or cereal may rely upon this common grain. Its popularity as a food and crop makes wheat widely studied. To get large and accurate data about wheat fields worldwide, plant scientists use image detection of "wheat heads"—spikes atop the plant containing grain. These images are used to estimate the density and size of wheat heads in different varieties. Farmers can use the data to assess health and maturity when making management decisions in their fields.

However, accurate wheat head detection in outdoor field images can be visually challenging. There is often overlap of dense wheat plants, and the wind can blur the photographs. Both make it difficult to identify single heads. Additionally, appearances vary due to maturity, color, genotype, and head orientation. Finally, because wheat is grown worldwide, different varieties, planting densities, patterns, and field conditions must be considered. Models developed for wheat phenotyping need to generalize between different growing environments. Current detection methods involve one- and two-stage detectors (Yolo-V3 and Faster-RCNN), but even when trained with a large dataset, a bias to the training region remains.

The Global Wheat Head Dataset is led by nine research institutes from seven countries: the University of Tokyo, Institut national de recherche pour l’agriculture, l’alimentation et l’environnement, Arvalis, ETHZ, University of Saskatchewan, University of Queensland, Nanjing Agricultural University, and Rothamsted Research. These institutions are joined by many in their pursuit of accurate wheat head detection, including the Global Institute for Food Security, DigitAg, Kubota, and Hiphen.

In this competition, you’ll detect wheat heads from outdoor images of wheat plants, including wheat datasets from around the globe. Using worldwide data, you will focus on a generalized solution to estimate the number and size of wheat heads. To better gauge the performance for unseen genotypes, environments, and observational conditions, the training dataset covers multiple regions. You will use more than 3,000 images from Europe (France, UK, Switzerland) and North America (Canada). The test data includes about 1,000 images from Australia, Japan, and China.

Wheat is a staple across the globe, which is why this competition must account for different growing conditions. Models developed for wheat phenotyping need to be able to generalize between environments. If successful, researchers can accurately estimate the density and size of wheat heads in different varieties. With improved detection farmers can better assess their crops, ultimately bringing cereal, toast, and other favorite dishes to your table.

Evaluation Criteria

Expected format

The submission has to be a .zip archive containing four COCO-format prediction JSON files (as defined at https://cocodataset.org/#format-results), one per domain (utokyo_1.json, utokyo_2.json, nau_1.json, uq_1.json), and one CSV file called "count.csv" with three columns: "session", "image_name", "count". The image ids can be found in correspondance.csv on GitHub.

If you use only a localization algorithm, you can generate the counting CSV with the script provided on GitHub. If you want to compete only on counting, you may submit only the CSV, within a .zip archive.
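As a rough illustration of the expected layout, the sketch below assembles a submission archive with the four per-domain JSON files and count.csv. The prediction and count values here are placeholders, not real model output.

```python
import csv
import io
import json
import zipfile

# Hypothetical per-domain COCO-style result lists and per-image counts;
# a real submission would be produced by a trained detector.
predictions = {
    "utokyo_1": [{"image_id": 1, "category_id": 1,
                  "bbox": [10.0, 20.0, 30.0, 40.0], "score": 0.9}],
    "utokyo_2": [],
    "nau_1": [],
    "uq_1": [],
}
counts = [("utokyo_1", "img_0001.png", 42)]  # (session, image_name, count)

with zipfile.ZipFile("submission.zip", "w") as zf:
    # One COCO result JSON per domain.
    for domain, results in predictions.items():
        zf.writestr(f"{domain}.json", json.dumps(results))
    # count.csv with the three required columns.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["session", "image_name", "count"])
    writer.writerows(counts)
    zf.writestr("count.csv", buf.getvalue())
```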

Metrics

The evaluation is divided into two groups of metrics: localization and counting.

Three different metrics are proposed for localization: mAP@0.5, Accuracy, and Kaggle's Accuracy. mAP@0.5 is implemented with pycocotools. Accuracy is the ratio of true positives to the sum of true positives, false negatives, and false positives. A true positive is counted when a single predicted object matches a ground-truth object with an IoU above 0.5. Kaggle's Accuracy is an extension of Accuracy and is defined here: https://www.kaggle.com/c/global-wheat-detection/overview/evaluation.
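The Accuracy definition above can be sketched as follows. This is an illustrative greedy one-to-one matcher, not the official evaluation code (which is on GitHub); boxes are assumed to be in COCO [x, y, w, h] convention.

```python
def iou(a, b):
    # Intersection-over-union for boxes in COCO [x, y, w, h] format.
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def accuracy(preds, gts, thr=0.5):
    # Greedy matching: each ground-truth box is matched to at most one
    # unused prediction with IoU >= thr; Accuracy = TP / (TP + FP + FN).
    matched = set()
    tp = 0
    for g in gts:
        best, best_iou = None, thr
        for i, p in enumerate(preds):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(gts) - tp
    return tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
```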

Root Mean Squared Error (RMSE) is proposed for counting and is implemented with scikit-learn.
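For reference, the counting metric amounts to the following (a NumPy sketch equivalent to scikit-learn's root mean squared error):

```python
import numpy as np

def rmse(predicted_counts, true_counts):
    # Root mean squared error between predicted and reference head counts.
    p = np.asarray(predicted_counts, dtype=float)
    t = np.asarray(true_counts, dtype=float)
    return float(np.sqrt(np.mean((p - t) ** 2)))
```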

The evaluation code can be reviewed on GitHub.

The evaluation set is composed of four different domains: nau_1, utokyo_1, utokyo_2 and uq_1. The metrics are calculated for each domain, and then the mean is computed.
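The per-domain averaging can be sketched as an unweighted mean over the four domains (the domain names are taken from the text above; the exact aggregation lives in the evaluation code on GitHub):

```python
def overall_score(per_domain_scores):
    # Unweighted mean of a metric over the four evaluation domains.
    domains = ["nau_1", "utokyo_1", "utokyo_2", "uq_1"]
    return sum(per_domain_scores[d] for d in domains) / len(domains)
```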

Post-processing 

In the Global Wheat Head Dataset, wheat heads on the image border are labeled only if more than 30% of the head is visible. This situation can be ambiguous. To resolve this for the localization challenge, wheat heads on the border are removed. The post-processing script used for the reference data is available on GitHub. To save server-side computing power, each competitor is expected to process their predictions with the same script.
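The idea of the border filter can be sketched as below. This is a hypothetical simplification that drops any box touching the image edge; the official script on GitHub defines the exact rule and must be used for submissions.

```python
def drop_border_boxes(boxes, width, height):
    # Keep only boxes lying strictly inside the image; boxes in
    # COCO [x, y, w, h] format. Simplified stand-in for the official
    # post-processing script.
    kept = []
    for x, y, w, h in boxes:
        if x > 0 and y > 0 and x + w < width and y + h < height:
            kept.append([x, y, w, h])
    return kept
```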

Still have a problem? Post an issue on GitHub: https://github.com/EtienneDavid/GWC-codalab-public

Terms and Conditions

Manual labeling is forbidden.


Benchmark leaderboard:
1. etienne_david: 0.7563