This challenge mimics the NIPS 2003 Feature Selection Challenge: it allows you to make post-challenge submissions on test data to benchmark new methods. The original challenge ended on December 12, 2003.
Go here for more details (an archive of the original website):
The aim of the feature selection challenge is to find feature selection algorithms that significantly outperform methods using all features, on ALL five benchmark datasets. To make it easy to enter results for all five datasets, every task is a two-class classification problem. You can download the datasets in the Participate section.
| Dataset  | Size   | Type           | Features | Training Examples | Validation Examples | Test Examples |
|----------|--------|----------------|----------|-------------------|---------------------|---------------|
| Dexter   | 0.9 MB | Sparse integer | 20000    | 300               | 300                 | 2000          |
| Dorothea | 4.7 MB | Sparse binary  | 100000   | 800               | 350                 | 800           |
The overall score is the average of the AUC scores across the datasets.
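As a minimal sketch of how such a score could be computed (a dependency-free AUC via positive/negative pair comparisons; the dataset names and toy labels below are illustrative, not real challenge data):

```python
def auc(labels, scores):
    """AUC as the probability that a random positive example is scored
    above a random negative one (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def challenge_score(results):
    """Average AUC over the datasets.

    results: dict mapping dataset name -> (true 0/1 labels, confidences).
    """
    aucs = [auc(y, s) for y, s in results.values()]
    return sum(aucs) / len(aucs)

# Toy example with two datasets (the real challenge averages over five):
toy = {
    "dexter":   ([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]),
    "dorothea": ([0, 1, 0, 1], [0.2, 0.9, 0.1, 0.7]),
}
print(challenge_score(toy))  # 0.875
```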
You can submit a single .predict file per dataset instead of the .resu and .conf files. Each line of a .predict file should contain the confidence that the corresponding label is positive. The BER metric will also be calculated, thresholding the confidences at 0.
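A minimal sketch of how BER (balanced error rate) could be computed under these rules, assuming ±1 true labels and signed confidences thresholded at 0 (the labels and scores below are illustrative):

```python
def ber(labels, confidences, threshold=0.0):
    """Balanced error rate: the mean of the error rates on the positive
    and negative classes, after thresholding confidences at `threshold`."""
    pred = [1 if c > threshold else -1 for c in confidences]
    pos = [p for y, p in zip(labels, pred) if y == 1]
    neg = [p for y, p in zip(labels, pred) if y == -1]
    fnr = sum(p != 1 for p in pos) / len(pos)   # missed positives
    fpr = sum(p != -1 for p in neg) / len(neg)  # false alarms
    return 0.5 * (fnr + fpr)

# One positive and one negative example misclassified -> BER = 0.5:
print(ber([1, 1, -1, -1], [0.9, -0.2, -0.5, 0.3]))  # 0.5
```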
The submission should consist of 5 .predict files (e.g.: arcene_test.predict) in a zip archive, without extra directories. You can optionally include .feat files. This sample submission contains random results.
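A sketch of packaging such a submission, with one `<dataset>_test.predict` file per dataset at the root of the zip archive (the five dataset names are those of the original NIPS 2003 challenge; the random confidences are placeholders, like the sample submission):

```python
import io
import random
import zipfile

# Dataset names from the original NIPS 2003 challenge (assumption).
DATASETS = ["arcene", "dexter", "dorothea", "gisette", "madelon"]

def build_submission(n_lines=10):
    """Return zip bytes holding one .predict file per dataset, no directories."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as zf:
        for name in DATASETS:
            # One confidence per test example, one per line.
            lines = "\n".join(f"{random.uniform(-1, 1):.6f}" for _ in range(n_lines))
            zf.writestr(f"{name}_test.predict", lines + "\n")
    return buf.getvalue()

archive = build_submission()
with zipfile.ZipFile(io.BytesIO(archive)) as zf:
    print(sorted(zf.namelist()))
```

In a real submission, `n_lines` would match the number of test examples for each dataset rather than a fixed placeholder.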
A submission in the format of the original contest should consist of 3 × 5 files in a zip archive:
See the sample submission in the original format.
The AUC scores appear on the leaderboard; you can view the BER scores and feature statistics of your submission by clicking 'View scoring output log'.
Start: Feb. 1, 2015, midnight