1st ACRE Cascade Competition

Organized by acre_account

Schedule

  • Development: starts Oct. 17, 2020, midnight UTC
  • Final: starts Jan. 15, 2021, midnight UTC
  • Competition ends: Jan. 22, 2021, midnight UTC

Welcome to the 1st ACRE Cascade Competition!

In this competition, ACRE organizers ask you to segment RGB images to distinguish between crop, weeds, and background.

Overview

ACRE is the Agri-food Competition for Robot Evaluation, part of the METRICS project funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 871252. Autonomous robots compete to demonstrate their ability to perform agricultural tasks (such as removing weeds or surveying crops down to individual-plant resolution). At field campaigns, participants collect data that are then made available for online competitions (Cascade Campaigns) like this one. For more information about ACRE and METRICS, visit the official website.

After years of decline, the number of undernourished people began to slowly increase again in 2015. Food security requires that everyone has access to enough food, produced in a sustainable manner. The topic is gaining increasing attention as food scarcity is worsened by a continuously growing population, and food production is threatened by climate change. The topic is so relevant that it is part of one of the 17 Sustainable Development Goals of the UN 2030 Agenda. In particular, food security is a pillar of SDG number 2, Zero Hunger.

In this context, the agricultural sector is undergoing a revolution driven by the introduction of digital technologies. The Digital Agricultural Revolution can help reduce the use of resources (water, fertilizers, and pesticides), thus diminishing environmental contamination and costs for farmers. It could also increase the climate resilience of crops and their productivity.

Automatic crop and weed segmentation can drive innovations that optimize agricultural processes. Indeed, automatic weed detection can be exploited by a ground robot for mechanical weeding, so pesticides could even be avoided entirely. By joining this challenge you can contribute to advancing the application of digital technologies in agriculture. We hope you will enjoy the competition :)

2-stage competition

This is a 2-stage competition. 

  • Stage 1 - Development: in this stage, participants are required to train their models on the Training set and submit predictions for the Test_Dev set. Submissions will be evaluated and the leaderboard will be updated. At the end of the Development stage, we will release the labels of the Test_Dev set and the new, unseen and unlabeled, Test set.
  • Stage 2 - Final: in this stage, participants are required to submit predictions for the new Test set. During this stage, the leaderboard will not be updated.

At the end of the Final stage, winners will be announced and the final leaderboard will be shown.

Winners

We will nominate multiple winners*:

  • ACRE Winner: the team that performs best on the entire dataset (Global IoU on the leaderboard)
  • Haricot Winner: the team that performs best on haricot images (Global IoU Haricot on the leaderboard)
  • Maize Winner: the team that performs best on maize images (Global IoU Maize on the leaderboard)
  • BIPBIP Winner: the team that performs best on BIPBIP team images (IoU Bipbip on the leaderboard)
  • PEAD Winner: the team that performs best on PEAD team images (IoU Pead on the leaderboard)
  • ROSEAU Winner: the team that performs best on ROSEAU team images (IoU Roseau on the leaderboard)
  • WeedElec Winner: the team that performs best on WeedElec team images (IoU Weedelec on the leaderboard)

*The top three teams in each of the above categories will be required to provide their executable code. Instructions for the submission of the code will be provided later during the competition.

Attention!

The results on the Test set and the winners will be revealed starting from 30 January 2021.

How to participate

To participate in the competition follow these steps:

  1. Apply for the competition using the ACRE challenge application form.
  2. Register for the competition with your CodaLab user under the competition Participate tab.

 

Metric overview

Submissions are evaluated on the mean Intersection over Union (IoU) obtained on the two classes, crop and weed. IoU is typically used in segmentation tasks and it essentially quantifies the percentage of overlap between predicted and target segmentations (see image below).

IoU = |prediction ∩ ground truth| / |prediction ∪ ground truth| = TP / (TP + FP + FN)

For example (single-class IoU),

ground truth (gt) mask:
1 1 1 0 1 1
0 0 0 1 1 1
1 1 1 0 0 0

predicted segmentation:
1 1 1 0 1 1
0 0 0 0 0 0
1 1 1 0 0 0

|intersection| = 8
|union| = 11
IoU = 8 / 11 ≈ 0.73

 

IoU is computed for each target class (crop and weed) separately, by considering prediction and ground truth as binary masks. Then, the final IoU is computed by averaging the two. Thus, we have the following formulation:

IoU(crop) = TP(crop) / (TP(crop) + FP(crop) + FN(crop))
IoU(weed) = TP(weed) / (TP(weed) + FP(weed) + FN(weed))
IoU = (IoU(crop) + IoU(weed)) / 2
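
For illustration only, the per-class and mean IoU above could be computed with a few lines of NumPy, assuming prediction and ground truth are label masks with values 0 (background), 1 (crop), and 2 (weed). This is a minimal sketch, not the official scoring program.

import numpy as np

def class_iou(pred, gt, cls):
    # IoU for a single class, treating prediction and ground truth as binary masks.
    pred_c = (pred == cls)
    gt_c = (gt == cls)
    intersection = np.logical_and(pred_c, gt_c).sum()
    union = np.logical_or(pred_c, gt_c).sum()
    return intersection / union if union > 0 else float("nan")

def mean_iou(pred, gt):
    # Average of crop (label 1) and weed (label 2) IoU.
    return (class_iou(pred, gt, 1) + class_iou(pred, gt, 2)) / 2

# The single-class example above: |intersection| = 8, |union| = 11
gt = np.array([[1, 1, 1, 0, 1, 1],
               [0, 0, 0, 1, 1, 1],
               [1, 1, 1, 0, 0, 0]])
pred = np.array([[1, 1, 1, 0, 1, 1],
                 [0, 0, 0, 0, 0, 0],
                 [1, 1, 1, 0, 0, 0]])
print(class_iou(pred, gt, 1))  # ≈ 0.727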

Leaderboard

The values shown in the leaderboard refer to the IoU computed globally for each team, for each crop, and for all the images. 

Thus, the following quantities are reported:

  • Bipbip team:
    • Global Bipbip-Haricot IoU
    • Global Bipbip-Maize IoU
    • Global Bipbip IoU *
  • Pead team:
    • Global Pead-Haricot IoU
    • Global Pead-Maize IoU
    • Global Pead IoU *
  • Roseau team
    • Global Roseau-Haricot IoU
    • Global Roseau-Maize IoU
    • Global Roseau IoU *
  • Weedelec team
    • Global Weedelec-Haricot IoU
    • Global Weedelec-Maize IoU
    • Global Weedelec IoU *
  • Global IoU Haricot *
  • Global IoU Maize *
  • Global IoU *

*The participants who rank highest in these columns will be the seven nominated winners.

Attention!

Scores on the Leaderboard will update after each submission. This means that your best scores will be overwritten even if the new score is lower.

Submission instructions

To keep submission files light, we ask you to submit the Run-Length Encoding (RLE) of the PNG masks. In particular, you are requested to upload your submission as a zip file containing a submission.json file. The JSON file must contain the RLE-encoded segmentation for each test image and must have the following structure (an illustrative sketch of how such a file could be built is given after the example below):

  • image name (without any extension, e.g., png, jpg, etc.)
    • shape: shape of the original image as list [Height, Width]
    • team: team of the original image. One in {'Bipbip', 'Pead', 'Roseau', 'Weedelec'}
    • crop: crop of the original image. One in {'Haricot', 'Mais'}
    • segmentation: prediction (as a dict)
      • crop: RLE encoded crop segmentation (no weed)
      • weed: RLE encoded weed segmentation (no crop)

This is repeated for each prediction in the test set, i.e., 

{'image_name_1': 
      {'shape': ...,
        'team': ..., 
        ...
      },
 'image_name_2': 
      {'shape': ...,
        'team': ..., 
        ....
      }, 
 ...,
 'image_name_N':
     {'shape': ...,
       'team': ..., 
        ....
      }
}
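
For illustration, a submission file with this structure could be assembled as sketched below. The run-length convention used here (alternating 1-based start/length pairs over the row-major flattened binary mask) is an assumption made for the sketch: the authoritative encoder is the one provided in the starting kit (prepare_submission.py), which should be used for real submissions. The predictions dictionary is hypothetical.

import json
import zipfile
import numpy as np

def rle_encode(binary_mask):
    # Illustrative RLE: alternating 1-based start/length pairs over the
    # row-major flattened mask. This convention is an assumption; the
    # official encoding is produced by the starting kit script.
    pixels = np.concatenate([[0], binary_mask.flatten().astype(np.uint8), [0]])
    runs = np.where(pixels[1:] != pixels[:-1])[0] + 1
    runs[1::2] -= runs[::2]
    return " ".join(str(x) for x in runs)

def build_submission(predictions, out_zip="submission.zip"):
    # predictions: {image_name: (label_mask, team, crop)}, where label_mask
    # uses 0 = background, 1 = crop, 2 = weed.
    submission = {}
    for name, (mask, team, crop) in predictions.items():
        submission[name] = {
            "shape": list(mask.shape),   # [Height, Width]
            "team": team,                # one of 'Bipbip', 'Pead', 'Roseau', 'Weedelec'
            "crop": crop,                # 'Haricot' or 'Mais'
            "segmentation": {
                "crop": rle_encode(mask == 1),
                "weed": rle_encode(mask == 2),
            },
        }
    with zipfile.ZipFile(out_zip, "w") as zf:
        zf.writestr("submission.json", json.dumps(submission))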

A script to convert predictions to RLE strings is provided in the starting_kit. The starting kit is structured as a tutorial on how to convert RGB masks to labels and how to prepare submissions, and it includes the RLE encoding script. Please execute the scripts contained in the starting kit in the following order:

  1. read_mask_example.py
  2. prepare_submission.py
  3. decode_rle_example.py

Example data used by the scripts in the starting kit consists of a single example RGB mask ('rgb_mask_example.png'). You will find the starting kit, as well as the competition data, under the Participate tab once your registration has been approved.
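
For reference, a decoder matching the illustrative encoding sketched above could look as follows (again, only a sketch under the same assumed convention; the official decoding is demonstrated in decode_rle_example.py):

import numpy as np

def rle_decode(rle_string, shape):
    # Rebuild a binary mask from alternating 1-based start/length pairs
    # (row-major order), i.e., the inverse of the illustrative encoder above.
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    values = [int(x) for x in rle_string.split()]
    for start, length in zip(values[0::2], values[1::2]):
        mask[start - 1:start - 1 + length] = 1
    return mask.reshape(shape)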

Attention!

  • During Stage 1 (Development), you may make at most 2 submissions per day and 150 submissions in total.
  • Any submission in Stage 2 (Final) must be accompanied by a description of the method used, entered in the dialog box under the Submit tab.

Dataset overview

The dataset is composed of images captured by different sensors at different times and depicts two kinds of crops: haricot and maize. The data come from the 2019 ROSE Challenge, where four teams competed with agricultural robots. Each team collected images of the same two crops, but at different times and with different sensors (RGB cameras).

Images in the dataset are divided into different folders based on the team that acquired them, i.e., Bipbip, Pead, Roseau, Weedelec. For each team, there are two sub-folders named after the type of crop present in the images, i.e., Haricot and Mais. Finally, for each crop, we provide the captured RGB images in the Images folder and the corresponding ground-truth segmentations in the Masks folder.

We provide both the training data, collected in the Training directory, and the test data, which is used by the scoring program to evaluate participants and whose directory changes depending on the current competition stage, which can be Development (1st) or Final (2nd) stage. In particular, we refer to the test set of the Development stage as Test_Dev and to the test set of the Final stage as Test. Corresponding folders in the dataset have the same names.

Test_Dev images are provided from the first stage (Development), without any ground-truth mask. Participants are required to provide the segmentations for the Test_Dev images by submitting a solution in the correct submission format. In the second stage (Final), ground-truth masks will also be provided for the Test_Dev set, while participants are required to provide the segmentations for the new Test images, whose ground truth will not be released.

To clarify the directory hierarchy and scheduling, we summarize the above information in the following table:

              Development stage   Final stage
  Training    Images + Masks      Images + Masks
  Test_Dev    Images              Images + Masks
  Test        hidden              Images
BIPBIP, ROSEAU, and WeedElec images are quite similar, while the PEAD team collected images from a different perspective. PEAD images could be used for training or for auxiliary tasks such as data augmentation.

Dataset details

Images

The teams' images share most of their properties but differ in image size and file format.

Shared properties:

  • Color space: RGB
  • Classes:
    • Crop
    • Weed
    • Other vegetation*
    • Soil*
  • Number of Training images (per team per crop): 90
  • Number of Test_Dev images (per team per crop): 15
  • Number of Test images (per team per crop): 20

*Attention! Image masks contain the four classes listed above; however, for evaluation purposes, the "Other vegetation" and "Soil" classes are treated as a single "Background" class.

Masks

Masks folders contain the ground-truth segmentation for each corresponding image (i.e., the image with the same name) in the Images folder.

Masks have exactly the same properties as the images, except that they all share the same file format: PNG.

In each mask, classes are represented by different colors. The dictionary that assigns a label to each color is provided in the starting_kit ('RGBtoTarget.txt'), together with an example script showing how to transform RGB masks into target masks. The dictionary is reported below, followed by an illustrative conversion sketch:

  • RGB: 0 0 0 - Target 0 (background)
  • RGB: 216 124 18 - Target 0 (background)
  • RGB: 255 255 255 - Target 1 (crop)
  • RGB: 216 67 82 - Target 2 (weed)


Competition Rules 

  1. Competition title: "1st ACRE Cascade Competition".
  2. This competition is organized by the ACRE (Agri-food Competition for Robot Evaluation) Organizers (“Competition Organizer”).
  3. This competition is public, but the Competition Organizer approves each user’s request to participate and may elect to disallow participation according to its own considerations. You must register using the ACRE challenge application form.
  4. Submission format: Zipped JSON file containing participant’s predictions.
  5. The competition has two stages. At the end of Stage 1, we will release the labels of the public test set and we will release a new, unseen and unlabeled, private test set. During Stage 2, participants are allowed to re-train their models on the combined training set (including the original training set from Stage 1 and the labeled public test set) and to submit their final result. During Stage 2, the leaderboard will not be updated. At the end of Stage 2 winners will be announced and the final leaderboard will be shown. Moving from Stage 1 to Stage 2, participants will not be required to upload their models.
  6. Users: Each participant must create a CodaLab account to register. Only one account per user is allowed.
  7. If you are entering as a representative of a company, educational institution, or other legal entity, or on behalf of your employer, these rules are binding for you individually and/or for the entity you represent or are an employee of. If you are acting within the scope of your employment as an employee, contractor, or agent of another party, you affirm that such party has full knowledge of your actions and has consented thereof. You further affirm that your actions do not violate your employer’s or entity’s policies and procedures.
  8. Teams: Participants are allowed to form teams. There are no limitations on the number of participants on the team. You may not participate in more than one team. Each team member must be a single individual operating a separate CodaLab account. Team formation requests will not be permitted after the beginning of Stage 2. Participants who would like to form a team should review the ‘Competition Teams’ section on CodaLab’s ‘user_teams’ Wiki page.
  9. Team mergers are allowed and can be performed by the team leader. The organizers don’t provide any assistance regarding team mergers.
  10. External data: You may use data other than the competition data to develop and test your models and submissions. However, any such external data you use for this purpose must be available for use by all other competition participants. Thus, if you use external data, you must make it publicly available and declare it in the competition discussion forum at the same moment you are going to use those data.
  11. Submissions may not use or incorporate information from hand labeling or human prediction of the training dataset or test dataset for the competition’s target labels. Ergo, solutions involving human labeling in the submission file will be disqualified.
  12. The private test set should be used as is, for prediction generation and submissions only. Using the private test set data in order to train the model (“pseudo-labeling” or any other technique that exploits the test data in the training process) is strictly prohibited.
  13. The delivered software code is expected to be capable of generating the winning submission and to operate automatically on new, unseen data without significant loss of performance.
  14. The first three top competitors on the following leaderboard columns must deliver to the Competition Organizer the final model’s software code as used to generate the winning submission: Global IoU, Global IoU Haricot, Global IoU Maize, IoU Bipbip, IoU Pead, IoU Roseau, IoU Weedelec. Other teams could be asked to deliver their final model’s software code for verifications. The delivered software code must be capable of generating the winning submission and contain a description of the resources required to build and/or run the executable code successfully.
  15. Scores on the leaderboard will update after each submission. This means that your best scores will be overwritten even if the new score is lower.
  16. Any submission in Stage 2 (Final) must be accompanied by a description of the method used, entered in the dialog box under the Submit tab.
  17. Competition Duration: 97 days (from 17 October 2020 to 22 January 2021).
  18. Final results and winners will be announced from the 30th of January 2021.

 

Terms and Legal Considerations

  1. This competition is organized by the ACRE (Agri-food Competition for Robot Evaluation) Organizers.
  2. The competition is open worldwide.
  3. The competition is public, but the Competition Organizer may elect to disallow participation according to its own considerations.
  4. The Competition Organizer reserves the right to disqualify any entrant from the competition if, in the Competition Organizer’s sole discretion, it reasonably believes that the entrant has attempted to undermine the legitimate operation of the competition through cheating, deception, or other unfair playing practices.
  5. Submissions are void if they are in whole or part illegible, incomplete, damaged, altered, counterfeit, obtained through fraudulent means, or late. The Competition Organizer reserves the right, in its sole discretion, to disqualify any entrant who makes a submission that does not adhere to all requirements.
  6. The competition winners will be the highest-ranked competitors in the following leaderboard columns: Global IoU, Global IoU Haricot, Global IoU Maize, IoU Bipbip, IoU Pead, IoU Roseau, IoU Weedelec.
  7. By downloading the data you agree to the legal terms contained in their license. The license is provided with the dataset in a file named "LICENSE.txt".
  8. By joining the competition, you affirm and acknowledge that you may not infringe upon any copyrights, intellectual property, or patent of another party for the software you develop in the course of the competition.
  9. The Competition Organizer reserves the right to verify eligibility and to adjudicate on any dispute at any time.
  10. If you wish to use external data, you may do so provided that you declare it in the competition forum and provided that such public sharing does not violate the intellectual property rights of any third party.
  11. Participants grant to the Competition Organizer the right to use their results submissions for any purpose whatsoever and without further approval.
  12. Right to cancel, modify or disqualify. The Competition Organizer reserves the right at its sole discretion to terminate, modify, or suspend the competition.

Q: How do I register for the competition?

A: In order to participate in the competition, you must complete the following steps:

  1. Apply for the competition using the ACRE challenge application form.
  2. Register for the competition with your CodaLab user under the competition Participate tab.

 

Q: How do I form a team?

A: Go to this CodaLab wiki page and follow the instructions under "Competition teams".

 

Q: Why is my submission’s status not changing? 

A: Submission results may take some time, usually a few minutes, and on rare occasions may even take a few hours.

Please try refreshing the page, or navigate to the Participate tab > Submit/View results, select the desired result in the table, and click "Refresh status".

 

Q: How many results can I submit during Stage 1?

A: In Stage 1 the limit per day is 2 and the total limit is 150.

For any question about the competition, start a new topic in the competition forum.

For personal communications to the competition's organizers, write to acre.competition1@gmail.com

For inquiries about the ACRE project, write to acre@metricsproject.eu

Development

Start: Oct. 17, 2020, midnight

Description: Development phase: create models and submit results on Test_Dev data; leaderboard results are updated.

Final

Start: Jan. 15, 2021, midnight

Description: Final phase: train models on Training and Test_Dev data and submit results on Test data; leaderboard results are not updated. Leaderboard results from the Development phase will automatically migrate to the Final phase leaderboard.

Results

Start: Jan. 22, 2021, midnight

Description: The results on the Test set will be revealed when the organizers make them available.

Competition Ends

Jan. 22, 2021, midnight
