ShallowGlobe

Organized by jinghuang

ShallowGlobe

Brought to you by Fabulous

The well-known Iris dataset from Fisher's classic paper (Fisher, 1936). The dataset contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other two; the latter are NOT linearly separable from each other.

References and credits:
R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7, Part II, 179-188 (1936).

ShallowGlobe: Evaluation

The problem is a multiclass classification problem. Each sample (an Iris) is characterized by its sepal and petal width and length (4 features). You must predict the Iris category of each sample: setosa, virginica, or versicolor.
For training, you are given a data matrix X_train of dimension num_training_samples x num_features and an array y_train of labels of dimension num_training_samples. You must train a model that predicts the labels for two test matrices, X_valid and X_test.
To prepare your submission, remember to use predict_proba, which provides a matrix of prediction scores scaled between 0 and 1. The matrix has dimension num_pattern x num_classes; each row gives the class-membership probabilities of one sample and sums to one. The easiest way to prepare your submission is with the starting kit; a minimal sketch is shown below.
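
For illustration only, here is a minimal sketch of preparing such a submission with scikit-learn. The file names and the choice of LogisticRegression are assumptions made for this sketch, not part of the official starting kit, which defines the actual input/output format.

    # Minimal sketch, not the official starting kit: file names and the choice
    # of LogisticRegression are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical data files (the starting kit defines the real format).
    X_train = np.loadtxt("iris_train.data")      # num_training_samples x 4
    y_train = np.loadtxt("iris_train.solution")  # num_training_samples labels
    X_valid = np.loadtxt("iris_valid.data")
    X_test = np.loadtxt("iris_test.data")

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)

    # predict_proba returns a num_pattern x num_classes matrix of scores in
    # [0, 1]; each row sums to one, as required for the submission.
    np.savetxt("iris_valid.predict", model.predict_proba(X_valid))
    np.savetxt("iris_test.predict", model.predict_proba(X_test))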

There are 2 phases:

  • Phase 1: development phase. We provide you with labeled training data and unlabeled validation and test data. Make predictions for both datasets. However, you will receive feedback on your performance on the validation set only. The performance of your LAST submission will be displayed on the leaderboard.
  • Phase 2: final phase. You do not need to do anything. Your last submission of phase 1 will be automatically forwarded. Your performance on the test set will appear on the leaderboard when the organizers finish checking the submissions.

This sample competition allows you to submit either:

  • Only prediction results (no code).
  • A pre-trained prediction model.
  • A prediction model that must be trained and tested.

The submissions are evaluated with the mse_metric metric, which for this competition computes the balanced accuracy (the average of the per-class accuracies). The score is rescaled linearly between 0 and 1, with 0 corresponding to random guessing and 1 to perfect predictions; a sketch of this computation follows.
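
As a rough illustration of the scoring, the sketch below computes a rescaled balanced accuracy. It assumes class labels are encoded as 0 .. num_classes-1, matching the columns of the score matrix; the official mse_metric implementation in the starting kit may differ in details.

    # Sketch of a rescaled balanced accuracy; the official mse_metric in the
    # starting kit may differ in details.
    import numpy as np

    def balanced_accuracy_rescaled(y_true, y_score):
        # y_true: labels encoded 0 .. num_classes-1 (assumed to match the
        # columns of y_score); y_score: num_pattern x num_classes scores.
        y_true = np.asarray(y_true)
        y_pred = np.argmax(y_score, axis=1)
        classes = np.unique(y_true)
        # Balanced accuracy = average of the per-class accuracies.
        per_class_acc = [np.mean(y_pred[y_true == c] == c) for c in classes]
        bac = np.mean(per_class_acc)
        # Rescale so random guessing (1/num_classes) maps to 0, perfect to 1.
        chance = 1.0 / len(classes)
        return (bac - chance) / (1.0 - chance)

    # With the 3 Iris classes, chance-level predictions score about 0 and
    # perfect predictions score 1.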

ShallowGlobe: Rules

Submissions must be made before the end of phase 1. You may make 5 submissions per day and 100 in total.

This challenge is governed by the general ChaLearn contest rules.

Development

Start: Jan. 28, 2018, 6:53 p.m.

Description: Development phase: create models and submit them, or directly submit results on the validation and/or test data; feedback is provided on the validation set only.

Final

Start: April 29, 2018, 6:53 p.m.

Description: Final phase: The results on the test set will be revealed when the organizers make them available.

Competition Ends

Never
