[OLD] Iris example challenge

Organized by guyon

Phases:

  • Development Phase: Oct. 31, 2016, midnight UTC
  • Final Phase: April 30, 2017, 11:59 p.m. UTC
  • Competition Ends: never
Fisher's Famous Iris Problem

This is an OLD SAMPLE COMPETITION from Codalab v1. You may DOWNLOAD THE BUNDLE OF THE CHALLENGE and use it as a template.

This is the well known Iris dataset from Fisher's classic paper (Fisher, 1936). The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other.

References and credits:
R. A. Fisher. The use of multiple measurements in taxonomic problems. Annual Eugenics, 7, Part II, 179-188 (1936).
The competition protocol was designed by Isabelle Guyon.
The starting kit was adapted from a Jupyter notebook designed by Balazs Kegl for the RAMP platform.
This challenge was generated using Chalab, a competition wizard designed by Laurent Senta.

Evaluation

The problem is a multiclass classification problem. Each sample (an Iris) is characterized by its sepal and petal width and length (4 features). You must predict the Iris categories: setosa, virginica, or versicolor.
You are given for training a data matrix X_train of dimension num_training_samples x num_features and an array y_train of labels of dimension num_training_samples. You must train a model which predicts the labels for two test matrices X_valid and X_test.
To prepare your submission, remember to use predict_proba, which returns a matrix of prediction scores scaled between 0 and 1, of dimension num_patterns x num_classes. Each row gives the class-membership probabilities for one sample and sums to one. The easiest way to prepare your submission is with the starting kit.
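As a minimal sketch of the prediction step, the following trains a classifier on the Iris data and produces a probability matrix with predict_proba. It uses scikit-learn's bundled copy of the dataset as a stand-in for the challenge's X_train/y_train files, and LogisticRegression is just one example model, not the one the organizers require:

```python
# Sketch: produce a (num_patterns x num_classes) probability matrix.
# load_iris stands in for the challenge's X_train / y_train data files.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Any classifier exposing predict_proba works here.
clf = LogisticRegression(max_iter=200).fit(X, y)

proba = clf.predict_proba(X)  # shape (150, 3); each row sums to 1
```

The same call would then be applied to X_valid and X_test to produce the two prediction files for the submission.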
Starting Kit
There are 2 phases:

  • Phase 1: development phase. We provide you with labeled training data and unlabeled validation and test data. Make predictions for both datasets. However, you will receive feedback on your performance on the validation set only. The performance of your LAST submission will be displayed on the leaderboard.
  • Phase 2: final phase. You do not need to do anything. Your last submission of phase 1 will be automatically forwarded. Your performance on the test set will appear on the leaderboard when the organizers finish checking the submissions.

This sample competition allows you to submit either:

  • Only prediction results (no code).
  • A pre-trained prediction model.
  • A prediction model that must be trained and tested.

The submissions are evaluated using the bac_multiclass metric. This metric computes the balanced accuracy (that is the average of the per class accuracies). The metric is re-scaled linearly between 0 and 1, 0 corresponding to a random guess and 1 to perfect predictions.
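The metric described above can be sketched as follows. This is a minimal NumPy version, under the assumption (implied by the text) that the linear rescaling maps the chance level 1/K for K classes to 0 and perfect prediction to 1; it is not the organizers' exact scoring code:

```python
import numpy as np

def bac_multiclass(y_true, y_pred, num_classes=3):
    """Balanced accuracy, linearly rescaled so that random guessing
    scores ~0 and perfect prediction scores 1 (as the challenge text
    describes). Sketch only, not the official scoring program."""
    per_class = [
        np.mean(y_pred[y_true == c] == c)   # accuracy on class c
        for c in range(num_classes)
        if np.any(y_true == c)
    ]
    bac = np.mean(per_class)                # average per-class accuracy
    # Assumed rescaling: expected per-class accuracy of a uniform
    # random guess is 1/K, which should map to 0.
    return (bac - 1.0 / num_classes) / (1.0 - 1.0 / num_classes)
```

For example, predicting every sample as class 0 gives per-class accuracies of (1, 0, 0), a raw balanced accuracy of 1/3, and a rescaled score of 0.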

Rules

Submissions must be made before the end of phase 1. You may make at most 5 submissions per day and 100 in total.

This challenge is governed by the general ChaLearn contest rules.

Development Phase

Start: Oct. 31, 2016, midnight

Description: Development phase: tune your models and submit prediction results, trained model, or untrained model.

Final Phase

Start: April 30, 2017, 11:59 p.m.

Description: Final phase (no submission, your last submission from the previous phase is automatically forwarded).

Competition Ends

Never

Leaderboard

  #  Username  Score
  1  shahram   1.0000
  2  Vhee      1.0000
  3  Btim      1.0000