The aim of the OpenKBP Challenge is to advance fair and consistent comparisons of dose prediction methods for knowledge-based planning (KBP). Participants will use a large dataset to train and test their prediction methods, and compare them against those of other participants using a set of standardized metrics.
KBP research is flourishing, but many results are reported using institution-specific datasets and evaluation metrics. While many competing approaches have reported positive results, comparing them is difficult without a large open-source dataset, standardized metrics, and a platform to encourage collaboration, sharing, and benchmarking. Such an open and standardized approach is a staple in thriving machine learning-driven fields.
Competitors will predict a clinical-quality dose distribution for a patient given only a contoured CT image. The challenge is divided into two streams: a dose stream, in which predictions are evaluated with a dose score, and a DVH stream, in which predictions are evaluated with a DVH score.
Anyone is welcome to participate in the challenge. Participants can choose to compete in either or both streams of the challenge.
One member from the top-performing team in each stream will receive complimentary registration to the 2020 Joint AAPM/COMP Meeting, along with one speaking slot in a Grand Challenge symposium at AAPM/COMP to present their methods and results. If the same team wins both streams, a second-place team from one of the streams will also be invited.
The OpenKBP Grand Challenge is proud to have a dynamic and diverse community of organizers. We are committed to equity, diversity and inclusion in our policies and procedures. Attracting registrations from people of all backgrounds, we will leverage all forms of diversity, promote inclusivity and create opportunities for all participants to experience the benefits of working collaboratively across cultures.
We recognize that in today’s world, women, LGBTQ individuals, racialized persons and persons of colour, persons with disabilities and Indigenous People are some of those who are under-represented in our STEM field. Our competition aims to build a community that reflects the society we live in. Equity initiatives therefore are an important component as they help us identify and eliminate barriers that may exist, and ensure that everyone, particularly those who are currently under-represented, have an equitable opportunity to participate in this event, develop their abilities, contribute and benefit from different perspectives.
Thank you for your shared commitment to the highest quality competition.
This page will guide you through the setup process for this competition. We hope the instructions and provided code will help you get started in less than 30 minutes.
Please register for the competition using this Google Form. Once the form is submitted, go to the Participate tab of this competition and click the Register button to complete the registration. We will accept any participant who fills out the form.
To help get you started, all competition data is cleaned and formatted consistently to facilitate the development and testing of your dose prediction models. A small code repository on GitHub is also available to help with data loading, and it includes a simple U-Net as an example of how a neural network can be used to tackle this problem.
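If you would rather write your own loader, the sketch below shows one minimal way to read a competition CSV into a dense volume. It assumes the sparse layout described in the competition PDF (a flattened voxel index column plus a value column for a 128 x 128 x 128 volume); the file path and the "data" column name are illustrative, so defer to the official repository for the exact format.

```python
import numpy as np
import pandas as pd

# Minimal sketch of reading one sparse-format competition CSV into a dense
# 3D tensor. Assumptions: each row stores a flattened voxel index (the CSV
# index column) and a value (a "data" column) for a 128 x 128 x 128 volume;
# the example path and column name are illustrative, not an official spec.
VOLUME_SHAPE = (128, 128, 128)

def load_sparse_volume(csv_path, shape=VOLUME_SHAPE):
    sparse = pd.read_csv(csv_path, index_col=0)
    dense = np.zeros(np.prod(shape))
    dense[sparse.index.values] = sparse["data"].values  # fill listed voxels only
    return dense.reshape(shape)

dose = load_sparse_volume("train-pats/pt_1/dose.csv")  # hypothetical path
```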
The details of the provided data and evaluation metrics are provided in this PDF. All data for this competition can be downloaded directly from CodaLab under the Files section of the Participate tab. Please note that dose (i.e., the feature being predicted) is only provided for the training set. Dose is intentionally held back from the validation and testing sets to ensure a fair competition.
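To make the flavor of the evaluation concrete, the sketch below computes a dose-score-style metric: the mean absolute voxel-wise difference between a predicted and a reference dose, restricted to a mask of voxels that can receive dose. The function and variable names here are our own; the PDF above remains the authoritative definition of the competition metrics.

```python
import numpy as np

# Illustrative dose-score-style metric: mean absolute dose difference over
# voxels flagged in a possible-dose mask. Names are assumptions; see the
# competition PDF for the official metric definitions.
def dose_error(predicted, reference, possible_dose_mask):
    voxels = possible_dose_mask > 0
    return np.abs(predicted[voxels] - reference[voxels]).mean()
```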
Participants may code in any language (e.g., Python, MATLAB); however, the provided repository is written in Python. The repository includes instructions for setting up either a local machine or a Google Colab Notebook (a powerful and free resource). More information is provided in the repository README. Please post any bugs you find to the Discussion Forums.
February 21, 2020 (12:01am PT) - Training and validation datasets available.
May 22, 2020 (12:01am PT) - Testing dataset available.
May 31, 2020 (11:59pm PT) - Final submission deadline for test dataset predictions.
June 5, 2020 - Winners notified.
July 2020 - Results presented at the Joint AAPM/COMP Meeting.
By submitting results to this competition, you agree that any validation phase scores may be made public. Scores may be invalidated if the organizers judge that the submission was incomplete, erroneous, or violated the spirit of the competition. You further agree that your model may be referred to by the name provided with the submission, or a suitable shorthand.
By downloading the data associated with the competition, you agree to use the data only for academic research.
Aaron Babier, Binghao Zhang, Rafid Mahmood, Timothy Chan - University of Toronto, Toronto, Canada
Andrea McNiven, Tom Purdie - Princess Margaret Cancer Center, Toronto, Canada
Kevin Moore - University of California San Diego, San Diego, U.S.A.
Please post any questions or inquiries to the competition forum. Alternatively, you can contact the organizers at openkbp@gmail.com.
The OpenKBP Challenge attracted 195 participants from 28 countries. The competition started on February 21, 2020 and concluded on June 1, 2020. A total of 1750 submissions were made to the validation phase by the 44 teams (comprising 73 people) that made at least one submission. In the testing phase, 28 teams (comprising 54 people) made submissions. The top teams in the competition are highlighted below.
Dose and DVH Stream: Fuxin Ji, Dashan Jiang, Qi Wu, and Shuolin Liu, LSL, Anhui University, China.
Dose Stream: Carlos Cardenas, Skylar Gay, Mary Gronberg, Tucker Netherton, and Dong Joo Rhee, SuperPod, MD Anderson Cancer Center, United States.
DVH Stream: Erik Faustmann, Lukas Fetty, Gerd Heilemann, and Christian Ramsl, PTV - Prediction Team Vienna, Medical University of Vienna, Austria.
This leaderboard contains the final results of this challenge, which was the first controlled and blinded test of KBP method implementations from several institutions. Submissions to this leaderboard can still be made on CodaLab; however, since the results are no longer blinded, there is no way to ensure the test set was used as intended (i.e., without any peeking).
Start: Feb. 21, 2020, 7:01 a.m.
Description: During the training and validation phase, a dataset of 200 contoured CT images and their corresponding dose distributions will be made available to all contestants. Contestants will use this data to train their models. A separate validation set of 40 contoured CT images is also provided for contestants to validate the effectiveness of their models. Predictions made on the validation set can be submitted to the CodaLab platform. All submissions will be scored according to the competition evaluation metrics, and those scores will be used to populate a public leaderboard, giving contestants the chance to compare results with others. These validation predictions will not be used to determine the winners of the challenge.
Start: June 1, 2020, 7 a.m.
Description: During the testing phase, a dataset of 100 new contoured CT images will be made available. Contestants will use their final models, which should have been tuned in the validation phase, to make predictions on this new unseen dataset. These final predictions must be submitted to CodaLab. The results of this evaluation will determine the winners, and the testing phase leaderboard will be revealed a week after the testing phase concludes. Once you have made your submission, please complete our model survey (https://forms.gle/Pb8WhMpKqZWf1a8F9) to summarize your model. We will consider your submission complete only if this survey is submitted. Any submission made on CodaLab that is not associated with a survey response will be considered void and will not be ranked in the final leaderboard.
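For contestants assembling submission files by hand, the sketch below shows one plausible way to serialize a predicted dose volume back into the sparse CSV format. Keeping only nonzero voxels and the "data" column name are assumptions carried over from the training data layout, so check the starter repository for the required submission format.

```python
import numpy as np
import pandas as pd

# Hypothetical inverse of a sparse-CSV loader: flatten a predicted dose
# volume, keep only its nonzero voxels, and write (voxel index, value)
# rows. The "data" column name mirrors the assumed training-data format.
def save_sparse_volume(volume, csv_path):
    flat = volume.flatten()
    nonzero = flat.nonzero()[0]
    pd.DataFrame({"data": flat[nonzero]}, index=nonzero).to_csv(csv_path)
```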
Competition ends: June 1, 2020, 6:59 a.m.