Leaf Segmentation Challenge (LSC)

Organized by Hanno


Please note: Submissions have been deactivated on this server. For new submissions, please see the current LSC version on the new CodaLab server.

 

Welcome to the Leaf Segmentation Challenge

This is the CodaLab version of the Leaf Segmentation Challenge from CVPPP2017, the third workshop on Computer Vision Problems in Plant Phenotyping, held in conjunction with ICCV2017. We set up this CodaLab version to meet the community's high interest in these challenges, as visible, e.g., in the high download numbers (see Fig. 1). For further information please refer to our dataset page.

Fig. 1. Strong growth in downloads.

To advance the state of the art in leaf segmentation and to demonstrate the difficulty of segmenting all leaves in an image of plants, we organized the Leaf Segmentation and Counting Challenges (LSC and LCC). This is the 3rd LSC, after the successful LSC 2014 and 2015, and the 2nd LCC. Examples of methods stemming from these challenges or using the data include http://link.springer.com/article/10.1007/s00138-015-0737-3 , https://arxiv.org/abs/1605.09410 , and https://arxiv.org/abs/1511.08250 . The major differences of the 2017 challenge are the expanded data and a focus on leaf segmentation accuracy; as such, ground truth foreground segmentation masks are provided for training and testing.

For the challenges we release training sets (containing raw images and annotations) and testing sets (containing raw images only).

How to participate

Please read the challenge terms and conditions first.

  1. If you do not have the training data, please download them from here: How to download data, or download the data from the 'Participate' tab once you have registered for the challenge.
  2. The provided data have been collected in our laboratories (datasets A1 -- A3) or derived from a public dataset (A4, public data kindly shared by Dr Hannah Dee from Aberystwyth) of top-view images of rosette plants. All images were hand labelled. The archive contains evaluation functions (in MATLAB and Python) for comparing segmentation and counting outcomes between ground truth and algorithm results. The MATLAB function works with folders containing image files, as described in the original challenge call document LSC 2017. The Python version has been designed to work with CodaLab and uses HDF5 files, in which the original folder structure has been reproduced.
  3. Training and testing results can be submitted independently. Results need to be submitted as a single zipped HDF5 file with the extension '.h5'. The folder structure in the HDF5 file needs to be exactly as in the provided HDF5 files; please examine the provided 'sample_submission.h5' file. Participants decide by button click which of their submitted results should be presented on the leaderboards. A sketch of how a submission file might be assembled follows below.
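
As an illustration, the following minimal Python sketch assembles predicted label images into a zipped HDF5 submission with h5py. The file names and the plant entry are hypothetical placeholders, and the group layout is an assumption based on the provided files; 'sample_submission.h5' remains the authoritative reference.

import zipfile
import h5py
import numpy as np

# Hypothetical predicted label masks, one 2D integer array per plant:
# 0 = background, 1..N = individual leaves. The 'AY/plantXXX' layout is
# assumed to mirror the provided files -- verify against
# 'sample_submission.h5' before submitting.
predictions = {
    "A1/plant001": np.zeros((500, 530), dtype=np.uint8),  # placeholder result
}

with h5py.File("submission.h5", "w") as f:
    for plant_path, label_image in predictions.items():
        f.create_dataset(plant_path + "/label", data=label_image)

# Results are submitted as a single zipped HDF5 file.
with zipfile.ZipFile("submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("submission.h5")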

About the data

We share images of tobacco plants and arabidopsis plants (for download links see the 'Participate' tab). Tobacco images were collected using a camera which contained a single plant in its field of view. Arabidopsis images were collected using a camera with a larger field of view encompassing many plants, and were cropped to single plants. The images released are either from mutants or wild types and were taken over a span of several days. Plant images are encoded as PNG files.

All images were hand labelled to obtain ground truth masks for each leaf in the scene. These masks are image files encoded as PNG, where each segmented leaf is identified with a unique integer value, starting from 1, and 0 is background. For the counting problem, annotations are provided in the form of a PNG image in which each leaf center is denoted by a single pixel. Additionally, a CSV file with image names and leaf counts is provided.
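
For illustration, here is a minimal Python sketch of how such a label mask might be read and split into per-leaf binary masks, assuming Pillow and NumPy are available (the filename is a hypothetical example):

import numpy as np
from PIL import Image

# Load an indexed label PNG: 0 = background, 1..N = individual leaves.
label = np.array(Image.open("plant001_label.png"))

# Unique non-zero values correspond to leaves; their count is the leaf count.
leaf_ids = [v for v in np.unique(label) if v != 0]
print("number of leaves:", len(leaf_ids))

# One binary mask per leaf, e.g. for per-leaf evaluation or visualization.
leaf_masks = {int(i): (label == i) for i in leaf_ids}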

For further information on the ground truth annotation process, please refer to:

  1. M. Minervini, A. Fischbach, H. Scharr, and S. A. Tsaftaris. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognition Letters, pages 1-10, 2015, doi:10.1016/j.patrec.2015.10.013
  2. H. Scharr, M. Minervini, A. Fischbach, and S. A. Tsaftaris. Annotated Image Datasets of Rosette Plants. Technical Report No. FZJ-2014-03837, Forschungszentrum Jülich, 2014
  3. J. Bell and H. M. Dee. Aberystwyth Leaf Evaluation Dataset [Data set]. Zenodo, 2016. http://doi.org/10.5281/zenodo.168158

or the challenge documents on LSC 2017 or LCC 2017.

 

File types and naming conventions:

Originally, plant images were encoded as PNG files and their sizes vary. Plants appear centered in the cropped image. Segmentation masks are image files encoded as PNG, where each segmented leaf is identified with a unique (per image) integer value, starting from 1, and 0 is background. A color index palette is included within the file for visualization purposes. The filenames have the form:

plantXXX_rgb.png is the raw color image in RGB

plantXXX_label.png is the labeled image as an indexed PNG file

plantXXX_fg.png is the foreground (plant segmentation) as a binary PNG file

where XXX is a 3- or 4-digit integer number. Note that plants are not numbered continuously.

If you are interested in working with this format, please visit https://www.plant-phenotyping.org/CVPPP2017-challenge for further information. For CodaLab, the images have been stored in a few HDF5 files. The folder structure in the HDF5 files resembles the original folder structure:

In the training images file as well as in the testing images file one finds

AY/plantXXX/rgb : the raw color image in RGB
AY/plantXXX/fg : the foreground (plant segmentation)

In the training truth file

AY/plantXXX/label : the labeled image

where Y is a number between 1 and 4 (training set) or 5 (testing set), and again XXX is a 3- or 4-digit integer number. Note that plants are not numbered continuously.
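
A minimal sketch of reading this structure with h5py; the dataset paths follow the description above, while the file name is a hypothetical placeholder for the provided training images file:

import h5py

with h5py.File("CVPPP2017_training_images.h5", "r") as f:  # placeholder name
    for subset in f:                # e.g. 'A1', ..., 'A4'
        for plant in f[subset]:     # e.g. 'plant001'
            rgb = f[subset][plant]["rgb"][()]  # raw color image in RGB
            fg = f[subset][plant]["fg"][()]    # foreground (plant) mask
            print(subset, plant, rgb.shape, fg.shape)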

Training set

We provide 27 tobacco images and 783 Arabidopsis images, together with their label images, to registered users.

Testing set

Here, we will not share ground truth leaf segmentations. We share two different versions of the testing set:

1. [SPLIT] images are split according to their origin, i.e., following the A1, …, A4 nomenclature.

2. [WILD] images are included in one folder (A5) only and may vary in size. This emulates a 'leaf counting in the wild' scenario, where data from different sources are pooled in the testing phase. If you want to perform well on this testing set, we advise pooling the data from A1 to A4 together (see the sketch below).
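
A minimal sketch of such pooling, again assuming the training images file described above (the file name is a placeholder):

import h5py

# Pool plants from all four training subsets into one list, e.g. to train
# a single model for the 'wild' A5 testing set.
pooled = []
with h5py.File("CVPPP2017_training_images.h5", "r") as f:  # placeholder name
    for subset in ("A1", "A2", "A3", "A4"):
        for plant in f[subset]:
            pooled.append((subset, plant, f[subset][plant]["rgb"][()]))
print(len(pooled), "pooled training images")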

Please note that IT IS STRICTLY FORBIDDEN to attempt to use the testing set in any other manner, e.g., to label testing data for improved training, to check algorithmic performance visually on the testing data, etc.  The organizers reserve the right to release a new testing set prior to the challenge for verifying the reported average performance of participants.

Evaluation Criteria


Here we use an updated Python version of the original evaluation function LSC_evaluation.m (in MATLAB), which we share with you in the MATLAB archive, for comparing segmentation outcomes between ground truth and algorithm results. The function uses the Dice score to evaluate segmentation results. It returns the following measures:

BestDice: best Dice score among all objects (leaves); estimates average leaf segmentation accuracy.

SBD: symmetric best Dice score among all objects (leaves); estimates average leaf segmentation accuracy. Currently only available in the score files.

FGBGDice: Dice on the foreground mask (i.e., the whole plant, taken as the union of all labels other than background); estimates how well the algorithm separates plant from background. Note that this metric will not be used for evaluation: we do not assess foreground segmentation quality for LSC 2017, since ground truth masks are made available.

AbsDiffFGLabels: absolute difference in object count, i.e., the absolute value of the number of leaves in the algorithm's result minus the ground truth; estimates how well the algorithm identifies the correct number of leaves present.

DiffFGLabels: signed difference in object count, i.e., the number of leaves in the algorithm's result minus the ground truth; estimates how well the algorithm identifies the correct number of leaves present.
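
For clarity, here is a hedged Python sketch of how BestDice, SBD, and the count differences could be computed from two label images. This is an illustrative reimplementation under the definitions above, not the official evaluation code; please use the provided evaluation functions for any reported numbers.

import numpy as np

def dice(a, b):
    # Dice score between two binary masks.
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total > 0 else 0.0

def best_dice(labels_a, labels_b):
    # Average, over objects in labels_a, of the best Dice against labels_b.
    ids_a = [i for i in np.unique(labels_a) if i != 0]
    ids_b = [i for i in np.unique(labels_b) if i != 0]
    if not ids_a:
        return 0.0
    return float(np.mean([
        max((dice(labels_a == i, labels_b == j) for j in ids_b), default=0.0)
        for i in ids_a
    ]))

def sbd(pred, gt):
    # Symmetric best Dice: the minimum of the two directed BestDice scores.
    return min(best_dice(pred, gt), best_dice(gt, pred))

def diff_fg_labels(pred, gt):
    # Signed difference in leaf count (algorithm result minus ground truth);
    # AbsDiffFGLabels is simply its absolute value.
    count = lambda x: len([i for i in np.unique(x) if i != 0])
    return count(pred) - count(gt)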

Once submitted results have been evaluated by the system, you may want to download the additional scoring files provided by the scoring program. You will find the link 'Download output from scoring step' at the bottom of the page where you submitted your results.

Terms and Conditions

  • All the data made available for the CVPPP 2017 Challenges (LSC and LCC) may only be used to generate submissions for these challenges.
  • Results submitted here can be published (as seen appropriate by the organizers) through different media, including this website and journal publications.

Please note that when using the data (images, labels, and/or evaluation results, etc.) provided here, it is mandatory to cite the following papers, which originally provided the data:

  1. M. Minervini, A. Fischbach, H. Scharr, and S. A. Tsaftaris. Finely-grained annotated datasets for image-based plant phenotyping. Pattern Recognition Letters, pages 1-10, 2015, doi:10.1016/j.patrec.2015.10.013
  2. H. Scharr, M. Minervini, A. Fischbach, and S. A. Tsaftaris. Annotated Image Datasets of Rosette Plants. Technical Report No. FZJ-2014-03837, Forschungszentrum Jülich, 2014
  3. J. Bell and H. M. Dee. Aberystwyth Leaf Evaluation Dataset [Data set]. Zenodo, 2016. http://doi.org/10.5281/zenodo.168158

These guidelines follow those established by challenges in biomedical image analysis such as example 1 and example 2.

 

Training: LSC as of CVPPP2017

Start: Feb. 22, 2018, midnight UTC

Description: LSC as of CVPPP2017: evaluation of the training data

Testing: LSC as of CVPPP2017

Start: Feb. 23, 2018, midnight UTC

Description: LSC as of CVPPP2017: evaluation of the testing data

Competition Ends

Never

Leaderboard

# Username Score
1 lds 0.41
2 Fr_AB 0.41
3 awolny 0.42