Welcome to the Robotic Sensor Network Laboratory (RSN) MinneApple detection challenge. This challenge is one of several computer vision challenges for precision agriculture that we host. For our challenges on fruit segmentation and fruit counting, please visit the respective websites.
Yield mapping in orchard environments from RGB images is a challenging and important problem on which many current state-of-the-art algorithms fail. We want to push the state of the art in computer vision algorithms that can handle large numbers of small fruits in cluttered and occluded outdoor environments.
The competition uses the recently released MinneApple dataset, which consists of roughly 1,000 annotated images for fruit detection and segmentation and 60,000 images for patch-based fruit counting. The dataset covers a wide range of scenarios, with varying apple varieties, illumination conditions, and degrees of occlusion. We provide 631 images for training/validation, and the rest are used for testing. Participants are encouraged to generate their own training/validation splits from the data for which we provide labels. We do not make the labels for the test set available, to ensure fair competition.
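For reference, the following is a minimal Python sketch of one way to generate such a training/validation split. The directory name "train/images", the output file names "train_split.txt" and "val_split.txt", and the 80/20 ratio are illustrative assumptions, not part of the official dataset layout; adjust them to match your copy of the data.

# Minimal sketch: create a reproducible train/validation split from the labeled images.
import os
import random

IMAGE_DIR = "train/images"   # assumed location of the labeled images; adjust as needed
VAL_FRACTION = 0.2           # hold out 20% of the labeled images for validation
SEED = 42                    # fixed seed so the split is reproducible

filenames = sorted(f for f in os.listdir(IMAGE_DIR) if f.endswith(".png"))
random.Random(SEED).shuffle(filenames)

num_val = int(len(filenames) * VAL_FRACTION)
val_files = filenames[:num_val]
train_files = filenames[num_val:]

with open("train_split.txt", "w") as f:
    f.write("\n".join(train_files))
with open("val_split.txt", "w") as f:
    f.write("\n".join(val_files))

print(f"{len(train_files)} training images, {len(val_files)} validation images")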
Please make sure to follow the submission instructions in the Evaluation section.
Additional information can be found on the project webpage and the GitHub repository.
The following metrics are used to characterize the performance of detection methods on MinneApple. The challenge winner is determined by the highest Average Precision (AP) score. An overview of these metrics can be found in [Shelhamer PAMI 2016].
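To make the metric concrete, here is a simplified Python sketch of how average precision can be computed for a single image at one IoU threshold. A full evaluation typically averages over many IoU thresholds and all test images, so treat this only as an illustration of the underlying idea; the function names and data layout are assumptions.

# Simplified sketch of single-image average precision at one IoU threshold.
import numpy as np

def iou(box_a, box_b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def average_precision(predictions, gt_boxes, iou_thresh=0.5):
    # predictions: list of (box, score); gt_boxes: list of boxes.
    preds = sorted(predictions, key=lambda p: p[1], reverse=True)
    matched = set()
    tp = np.zeros(len(preds))
    fp = np.zeros(len(preds))
    for i, (box, _) in enumerate(preds):
        best_iou, best_j = 0.0, -1
        for j, gt in enumerate(gt_boxes):
            if j in matched:
                continue
            overlap = iou(box, gt)
            if overlap > best_iou:
                best_iou, best_j = overlap, j
        if best_iou >= iou_thresh:
            tp[i] = 1
            matched.add(best_j)
        else:
            fp[i] = 1
    recall = np.cumsum(tp) / max(len(gt_boxes), 1)
    precision = np.cumsum(tp) / (np.cumsum(tp) + np.cumsum(fp))
    # Area under the precision-recall curve (non-interpolated).
    ap, prev_r = 0.0, 0.0
    for p, r in zip(precision, recall):
        ap += p * (r - prev_r)
        prev_r = r
    return ap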
The server expects a single ZIP archive containing a single text file with your results. Make sure that the image names listed in the text file match the original image file names. The structure of your ZIP file should look like this:
results.zip--|
|--results.txt
Each line of the results.txt file should contain a filename, bounding box coordinates, and a confidence score, separated by commas, as follows:
results.txt--|
|--image_name_1.png,x1,y1,x2,y2,score1
|--...
Bounding boxes are defined by two points: the upper-left corner (x1, y1) and the lower-right corner (x2, y2).
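As an illustration, the following Python sketch writes detections in this format and packages them into results.zip. The detections dictionary is hypothetical placeholder data; replace it with your detector's output.

# Minimal sketch: write results.txt in the expected format and zip it for submission.
import zipfile

# Hypothetical example detections per image as (x1, y1, x2, y2, score) tuples.
detections = {
    "image_name_1.png": [(10, 20, 55, 70, 0.91), (100, 40, 130, 80, 0.74)],
}

with open("results.txt", "w") as f:
    for image_name, boxes in detections.items():
        for x1, y1, x2, y2, score in boxes:
            # One detection per line: filename,x1,y1,x2,y2,score
            f.write(f"{image_name},{x1},{y1},{x2},{y2},{score}\n")

# Place results.txt at the root of results.zip, as the server expects.
with zipfile.ZipFile("results.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("results.txt")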
The images and annotations in this dataset belong to the Robotic Sensor Network Laboratory at the University of Minnesota and are licensed under an Attribution-NonCommercial-ShareAlike 3.0 United States license.
Start: Nov. 1, 2019, midnight
End: Never
# | Username | Score
---|---|---
1 | zhenek | 0.482 |
2 | Luis_Cossio | 0.482 |
3 | CallumClark | 0.441 |