The MICCAI 2014 Machine Learning Challenge

Organized by ender.konukoglu

First phase (Evaluation) starts: April 14, 2014, midnight UTC
Competition ends: June 14, 2014, midnight UTC

THE MICCAI 2014 MACHINE LEARNING CHALLENGE (MLC)

Predicting Binary and Continuous Phenotypes from Structural Brain MRI Data

Welcome to the home page of the MICCAI 2014 Machine Learning Challenge (MLC). This challenge is organized in conjunction with the 17th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), which will take place at the Massachusetts Institute of Technology, Cambridge, MA, USA. More information can be found on the MLC 2014 workshop website.

OVERVIEW

Machine learning tools have been increasingly applied to structural brain magnetic resonance imaging (MRI) scans, largely for developing models to predict clinical phenotypes at the individual level. Despite significant methodological developments and novel application domains, there has been little effort to conduct benchmark studies with standardized datasets, which researchers could use to validate new tools and, more importantly, to compare them objectively with state-of-the-art algorithms. The MICCAI 2014 Machine Learning Challenge (MLC) takes a significant step in this direction by employing four separate, carefully compiled and curated, large-scale (each N > 70) structural brain MRI datasets with accompanying clinically relevant phenotypes. Our goal is to provide a snapshot of the current state of the art in the field of neuroimage-based prediction, and to attract machine-learning practitioners to the MICCAI community and the field of medical image computing in general. We believe MICCAI 2014 MLC will be a perfect complement to the MICCAI 2014 main conference, the MICCAI 2014 Machine Learning in Medical Imaging Workshop, and our sister challenge at MICCAI 2014, CADDementia, which focuses on diagnosing Alzheimer's disease from brain MR scans.

SUBMISSION DETAILS

Binary Classification:

The binary classification results should be gathered in a csv file named Pred_BinaryTest_sbj_list.csv. This file should adhere to the same structure as the BinaryTest_sbj_list.csv file that is provided as part of the data for this competition. As an example, the structure is as follows:

SID,Label
Sbj1,0.600113264532
Sbj2,0.320113372428
Sbj3,0.346193564794
Sbj4,0.213208271081
...

The two labels in the binary classification problem are marked as 1 and 0. For each subject, the prediction should be given as the probability of having label 1: P(label(SbjN) = 1).

Continuous Regression:

The continuous regression results should be gathered in a csv file named Pred_ContTest_sbj_list.csv. This file should adhere to the same structure as the ContTest_sbj_list.csv file that is provided as part of the data for this competition. The structure of this file is the same as in the binary classification case.

For each subject, the prediction should be given as a real number in single (float) or double precision.
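
As a minimal sketch of how these two files might be written, the following Python snippet uses the standard csv module. The dictionaries binary_probs and cont_preds are hypothetical placeholders for your models' outputs (the binary values below are taken from the example above; the continuous values are made up):

import csv

# Hypothetical model outputs: each maps a subject ID to a prediction.
# binary_probs holds P(label = 1); cont_preds holds real-valued predictions.
binary_probs = {"Sbj1": 0.600113264532, "Sbj2": 0.320113372428}
cont_preds = {"Sbj1": 27.3, "Sbj2": 31.9}

with open("Pred_BinaryTest_sbj_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["SID", "Label"])
    for sid, prob in binary_probs.items():
        writer.writerow([sid, prob])

with open("Pred_ContTest_sbj_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["SID", "Label"])
    for sid, pred in cont_preds.items():
        writer.writerow([sid, pred])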

Estimated Accuracies:

An important part of machine learning is the estimation of generalization accuracy. In this competition, we would like to gather information on how well generalization accuracy can be estimated from a given training dataset. We strongly encourage participants to submit estimates of generalization accuracy computed via cross-validation on the training dataset. Specifically, we ask each participant to submit their estimate obtained with 5-fold cross-validation. These cross-validation results should be gathered in csv files named Pred_Binary_Estimated_Accuracies.csv for binary classification and Pred_Cont_Estimated_Accuracies.csv for continuous regression. The structures of these files are as follows:

Contents of Pred_Binary_Estimated_Accuracies.csv are:

Accuracy Estimate, AUC estimate

for instance

0.60, 0.70

Likewise, the contents of Pred_Cont_Estimated_Accuracies.csv are:

RMSE estimate, Pearson Correlation Coefficient Estimate
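
As an illustrative sketch (not a prescribed pipeline), the following Python code computes 5-fold cross-validated estimates with scikit-learn and writes both files. The synthetic data, the choice of LogisticRegression and Ridge as models, and the module paths (which follow current scikit-learn versions) are all assumptions; substitute your own data and methods:

import csv
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_predict
from sklearn.linear_model import LogisticRegression, Ridge
from sklearn.metrics import accuracy_score, roc_auc_score, mean_squared_error
from scipy.stats import pearsonr

# Synthetic stand-ins for the real training features and targets.
rng = np.random.default_rng(0)
X_bin, y_bin = rng.normal(size=(80, 10)), rng.integers(0, 2, size=80)
X_cont, y_cont = rng.normal(size=(80, 10)), rng.normal(size=80)

# Binary task: out-of-fold probabilities from 5-fold cross-validation.
probs = cross_val_predict(LogisticRegression(), X_bin, y_bin,
                          cv=StratifiedKFold(n_splits=5),
                          method="predict_proba")[:, 1]
acc = accuracy_score(y_bin, (probs > 0.5).astype(int))
auc = roc_auc_score(y_bin, probs)
with open("Pred_Binary_Estimated_Accuracies.csv", "w", newline="") as f:
    csv.writer(f).writerow([round(acc, 2), round(auc, 2)])

# Continuous task: out-of-fold predictions from 5-fold cross-validation.
preds = cross_val_predict(Ridge(), X_cont, y_cont, cv=KFold(n_splits=5))
rmse = np.sqrt(mean_squared_error(y_cont, preds))
r, _ = pearsonr(y_cont, preds)
with open("Pred_Cont_Estimated_Accuracies.csv", "w", newline="") as f:
    csv.writer(f).writerow([round(rmse, 2), round(r, 2)])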

Additionally, participants are free to submit generalization accuracy estimates obtained with their favourite method, e.g. leave-N-out combined with bootstrapping, jack-knifing, ... These submissions should be gathered in files named Pred_Binary_AdditionalEstimates.csv and Pred_Cont_AdditionalEstimates.csv. These files should contain a brief description of each method followed by the estimated values. The structures are as follows:

Contents of Pred_Binary_AdditionalEstimates.csv are:

#Description: Leave-one-out with Bootstrapping:
0.61, 0.70
#Description: Leave-2-out with Jackknifing:
0.62, 0.69
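
For instance, a leave-one-out estimate for the binary task could be produced and written in the required format as follows; the synthetic data and the logistic-regression model are again placeholders:

import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score

# Synthetic stand-in for the real training data.
rng = np.random.default_rng(0)
X_bin, y_bin = rng.normal(size=(80, 10)), rng.integers(0, 2, size=80)

# Leave-one-out: each subject is predicted by a model trained on all others.
probs = cross_val_predict(LogisticRegression(), X_bin, y_bin,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
acc = accuracy_score(y_bin, (probs > 0.5).astype(int))
auc = roc_auc_score(y_bin, probs)

with open("Pred_Binary_AdditionalEstimates.csv", "w") as f:
    f.write("#Description: Leave-one-out cross-validation:\n")
    f.write(f"{acc:.2f}, {auc:.2f}\n")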

Algorithm Description:

Finally, please write a brief description of your method to include in your submission. This should be at most one page of text (it can be as short as a single paragraph) and include all relevant information (and references) needed for someone else to replicate your analysis. If you use a novel method, please provide a reference for it. We also recommend sharing a pointer to the executables of the method(s) you used. Save this file as Description.doc (MS Word) or Description.tex (LaTeX).

Submission File:

The submission file should be a zip file with the following structure:

submission.zip
|- Pred_BinaryTest_sbj_list.csv
|- Pred_ContTest_sbj_list.csv
|- Pred_Binary_Estimated_Accuracies.csv
|- Pred_Cont_Estimated_Accuracies.csv
|- Pred_Binary_AdditionalEstimates.csv
|- Pred_Cont_AdditionalEstimates.csv
|- Description.tex

Note that the zip file contains the csv files directly, NOT a folder that contains the csv files.
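
One way to build such a flat archive is Python's zipfile module. A minimal sketch, assuming all seven files already exist in the working directory:

import zipfile

# Writing each file under its bare name (no directory prefix) keeps the
# csv files at the root of the archive rather than inside a folder.
files = [
    "Pred_BinaryTest_sbj_list.csv",
    "Pred_ContTest_sbj_list.csv",
    "Pred_Binary_Estimated_Accuracies.csv",
    "Pred_Cont_Estimated_Accuracies.csv",
    "Pred_Binary_AdditionalEstimates.csv",
    "Pred_Cont_AdditionalEstimates.csv",
    "Description.tex",
]
with zipfile.ZipFile("submission.zip", "w") as zf:
    for name in files:
        zf.write(name, arcname=name)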

VALIDATION DETAILS

For binary classification, we will use ACC (accuracy) and AUC (area under the curve) metrics.

ACC is defined based on the binarized predictions. To binarize, we will use 0.5 as the threshold on P(label(SbjN) = 1), i.e.
P(label(SbjN) = 1) > 0.5 -> prediction(SbjN) = 1
P(label(SbjN) = 1) < 0.5 -> prediction(SbjN) = 0
P(label(SbjN) = 1) = 0.5 -> prediction(SbjN) = fair coin toss
ACC is equal to the number of cases where the binarized prediction is the same as the ground truth label, divided by the total number of cases.

AUC is the area under the receiver operating characteristic (ROC) curve. For further details, please refer to the standard references on ROC analysis.
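
The following Python sketch mirrors the scoring rules above (it is not the organizers' actual evaluation code); the labels and probabilities are synthetic stand-ins:

import numpy as np
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for ground truth labels and submitted probabilities.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=100)
probs = rng.uniform(size=100)

# Binarize at 0.5; predictions exactly at 0.5 are resolved by a coin toss.
preds = (probs > 0.5).astype(int)
ties = probs == 0.5
preds[ties] = rng.integers(0, 2, size=int(ties.sum()))

acc = np.mean(preds == y_true)      # fraction of correct predictions
auc = roc_auc_score(y_true, probs)  # area under the ROC curve
print(acc, auc)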

For continuous regression, we will compute RMSE (root mean squared error) and Pearson's correlation coefficient (Pearson's R). Both will be computed between the continuous predictions and the ground truth label values.

We will compute confidence intervals for each evaluation measure using 10,000 bootstrap samples (drawn with replacement).
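
A sketch of such a percentile bootstrap for the regression metrics is given below; the data are synthetic stand-ins and the organizers' exact resampling scheme may differ:

import numpy as np
from scipy.stats import pearsonr

def bootstrap_ci(y_true, y_pred, metric, n_boot=10000, alpha=0.05, seed=0):
    # Percentile bootstrap: resample subjects with replacement n_boot times.
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = [metric(y_true[idx], y_pred[idx])
             for idx in rng.integers(0, n, size=(n_boot, n))]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rmse = lambda t, p: np.sqrt(np.mean((t - p) ** 2))
pearson_r = lambda t, p: pearsonr(t, p)[0]

# Synthetic stand-ins for ground truth values and continuous predictions.
rng = np.random.default_rng(0)
y_true = rng.normal(size=80)
y_pred = y_true + rng.normal(scale=0.5, size=80)

print(bootstrap_ci(y_true, y_pred, rmse))
print(bootstrap_ci(y_true, y_pred, pearson_r))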

For the evaluation computations we will use the scikit-learn package in Python. For participants who would like to use MATLAB instead, we recommend the perfcurve function for computing AUC values.

More information about the workshop can be found on the MLC 2014 workshop website.

CHALLENGE TERMS AND CONDITIONS

All the data made available for the MICCAI 2014 Machine Learning Challenge (MLC) can only be used to generate a submission for this challenge.

Results submitted to MICCAI 2014 MLC can be published (as deemed appropriate by the organizers) through different media, including this website and journal publications.

By submitting an entry to MICCAI 2014 MLC, each team agrees to have at least one member register for the accompanying workshop (held on September 18, 2014 at MIT).

More information can be found on the MLC 2014 workshop website.
