Automated Deep Learning Self-Service

Organized by michavaccaro
Find a deep learning model automatically for your task!

The AutoDL challenge series, in which participants had to produce Automated Deep Learning solutions evaluated on datasets covering a wide range of domains (image, video, speech, text and tabular) for multi-label (or multi-class) classification tasks, delivered its final conclusions in March 2020. The overall winning solution of the final challenge, DeepWisdom, is made available to the public through this Codalab competition.

Indeed, unlike most other Codalab competitions, where you are asked to submit code to solve datasets, here you submit a dataset on which the winning AutoDL solution is run. This gives you the opportunity to evaluate the model produced and trained by DeepWisdom on your task, and even the possibility to make predictions on unseen data. As in the AutoDL competition, a submission is processed in at most 20 minutes.

The format in which you have to provide your data is the same as the one used for the AutoDL challenge series: a format based on TFRecords, used by TensorFlow. See the 'Format my submission' page (in the left-side menu) for all the information on how to format your submission correctly. The 'Making predictions' page shows you how to modify your submission if you want to use this service to predict on unseen data.

You can find pre-formatted datasets in the starting kit, which you can already submit to get accustomed to the interface.

Finally, some information about how the score is calculated is also provided in the "About the score" section.

Starting Kit

We provide one dataset of each domain: Tabular, Image, Video, Time Series and Text.

Try to upload one now!

  • Mini Starting Kit (1 dataset)  [Download]  128 MB
  • Starting Kit (5 datasets)  [Download]  2.7 GB
  • DeepWisdom's code  [Link]  (GitHub)
  • Format your own dataset  [Link]  (GitHub)

How to format my submission?

Convert your own data to the AutoDL format!

The dataset must be provided in the same uniform format as in the AutoDL competition (details below). Your submission .zip archive must look like:

dataset.zip/
|--dataset.data/
|----train/
|------metadata.textproto
|------dataset-train.tfrecord
|----test/
|------metadata.textproto
|------dataset-test.tfrecord
|--dataset.solution
|--metadata

Be careful to zip the files directly into a single archive; do not zip a parent folder. The metadata file is required but may be empty. The data uses a generic format based on TFRecords, the record format used by TensorFlow. It is divided into two parts: the train data is used to select and train the model, while the test data is then used to evaluate it by comparing the model's predictions to the labels. The *.solution file must contain the one-hot encoded labels of the test examples; it is used to compute the score of the model produced by DeepWisdom on your task.
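To make the one-hot .solution file and the "zip the files directly" rule concrete, here is a minimal Python sketch. It is not part of the official tooling: the dataset name mydataset, the label values and the plain-text %d format are illustrative assumptions, and the formatting tools described below build these files for you.

# Minimal sketch (illustrative, not the official tooling): write a one-hot
# .solution file and zip the submission with the files at the archive root.
import zipfile

import numpy as np

n_classes = 10
test_labels = np.array([3, 0, 7, 7, 1])        # integer class index of each test example (made-up)
solution = np.eye(n_classes)[test_labels]      # one-hot matrix of shape (n_examples, n_classes)
np.savetxt("mydataset.solution", solution, fmt="%d")

# Zip the files directly: entries sit at the root of the archive, with no parent folder.
with zipfile.ZipFile("mydataset.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("mydataset.data/train/metadata.textproto")
    zf.write("mydataset.data/train/dataset-train.tfrecord")
    zf.write("mydataset.data/test/metadata.textproto")
    zf.write("mydataset.data/test/dataset-test.tfrecord")
    zf.write("mydataset.solution")
    zf.write("metadata")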

You can format your own dataset in the AutoDL data format with these tools. The script `check_n_format.py` formats your dataset into the right submission structure. All you have to do is follow the README (which describes the environment and the supported input formats), run the script and zip the resulting AutoDL-formatted dataset!

If you have unlabelled data on which you would like to make predictions (after training and testing), see the 'Making predictions' page. Your submission does not need to include unlabelled data, but you may supply it if you want predictions.

Making predictions

If you want to make predictions on unlabelled data, you can add an "unlabelled" directory to your submission alongside the "train" and "test" directories (which will still be used to train the model and to compute a score, respectively). You can use the same tools as above to convert your raw unlabelled data to the TFRecord format (you will need to run the `format_unseen.py` script; see the dedicated README section). Your submission will thus look like:

dataset.zip/
|--dataset.data/
|----train/
|------metadata.textproto
|------dataset-train.tfrecord
|----test/
|------metadata.textproto
|------dataset-test.tfrecord
|----unlabelled/
|------metadata.textproto
|------dataset-unlabelled.tfrecord
|--dataset.solution
|--metadata

You will then find a directory "labelled" in the prediction output of your submission, containing a file dataset.predict, listing class probabilities for each sample.
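As a minimal sketch of reading that output (an illustration only, assuming dataset.predict holds one row per unlabelled sample with space-separated per-class scores in the same column order as the .solution file, and assuming you have downloaded the output so that the labelled directory sits in your working directory):

# Minimal sketch: turn dataset.predict back into predicted class labels.
import numpy as np

scores = np.loadtxt("labelled/dataset.predict")   # shape (n_samples, n_classes), assumed layout
predicted_classes = scores.argmax(axis=1)         # index of the most likely class per sample
print(predicted_classes[:10])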

Metrics

The scoring system used for "rating" the model produced on your submission is the same as the one used during the AutoDL challenge. Here are some details about how this score is obtained.

DeepWisdom, as participants in the AutoDL challenge were encouraged to do, trains in batches to incrementally improve its performance. In this way we can plot learning curves: "performance" as a function of time. Each time the "train" method terminates, the "test" method is called and the results are saved together with their timestamp, so the scoring program can use them.

We treat both multi-class and multi-label problems alike. Each label/class is considered a separate binary classification problem, and we compute the normalized AUC (or Gini coefficient)

    2 * AUC - 1

as the score for each prediction, where AUC is the usual area under the ROC curve (ROC AUC).
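As an illustration of this per-prediction score (not the actual scoring program), here is a minimal scikit-learn sketch, assuming a binary label matrix y_true and a score matrix y_score with one column per class, where every class has at least one positive and one negative test example:

# Minimal sketch: each class is scored as a separate binary problem with
# the normalized AUC 2 * AUC - 1, then the per-class scores are averaged.
import numpy as np
from sklearn.metrics import roc_auc_score

def normalized_auc(y_true, y_score):
    """y_true: binary matrix (n_samples, n_classes); y_score: predicted scores, same shape."""
    per_class = [2.0 * roc_auc_score(y_true[:, c], y_score[:, c]) - 1.0
                 for c in range(y_true.shape[1])]
    return float(np.mean(per_class))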

For each dataset, we compute the Area under Learning Curve (ALC). The learning curve is drawn as follows:

  • at each timestamp t, we compute s(t), the normalized AUC (see above) of the most recent prediction. In this way, s(t) is a step function w.r.t time t;
  • in order to normalize time to the [0, 1] interval, we apply the time transformation

    t'(t) = log(1 + t / t0) / log(1 + T / t0)

    where T is the time budget (default value 1200 seconds = 20 minutes) and t0 is a reference time amount (default value 60 seconds);
  • we then compute the area under the learning curve using the formula

    ALC = ∫ s(t') dt'   (t' going from 0 to 1)
        = ∫ s(t) / ((t + t0) * log(1 + T / t0)) dt   (t going from 0 to T),

    so s(t) is weighted by 1/(t + t0), giving a stronger importance to predictions made at the beginning of the learning curve.

Thus, the score you get for your submission on the leaderboard is the ALC mentioned above. You can also visualize the learning curve in the "detailed results" of your submission.
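For illustration, here is a minimal sketch of the ALC computation under the definitions above (T = 1200 s, t0 = 60 s). The timestamps and scores in the example call are made-up values; the real computation is performed by the challenge's scoring program.

# Minimal sketch: Area under the Learning Curve (ALC) from timestamped scores.
# s(t) is a step function: at time t, the normalized AUC of the most recent prediction.
import numpy as np

def alc(timestamps, scores, T=1200.0, t0=60.0, num_points=10000):
    """timestamps: sorted times (seconds) at which predictions were made;
    scores: normalized AUC of each prediction; returns the ALC."""
    t_norm = np.linspace(0.0, 1.0, num_points)               # normalized time t' in [0, 1]
    t = t0 * ((1.0 + T / t0) ** t_norm - 1.0)                 # inverse of the time transformation
    idx = np.searchsorted(timestamps, t, side="right") - 1    # most recent prediction at each t
    s = np.where(idx >= 0, np.asarray(scores)[np.maximum(idx, 0)], 0.0)
    return float(np.trapz(s, t_norm))                         # integrate s over t' in [0, 1]

print(alc(timestamps=[30.0, 120.0, 600.0], scores=[0.2, 0.5, 0.6]))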

(Figures: examples of learning curves.)

Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth on this page.
  • Intellectual property: The provider of this service will not re-use the uploaded datasets for any other purpose. Users retain all property rights to their data.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Registration: The participants must register on Codalab and provide a valid email address. Registering multiple times to increase the number of submissions is NOT permitted.
  • Anonymity: The participants can remain anonymous by using a pseudonym. See our privacy policy for details.
  • Submission method: The results must be submitted through this CodaLab competition site. The number of submissions per day and the maximum total computational time are limited and subject to change, depending on the number of participants. Using multiple accounts to increase the number of submissions is NOT permitted. The entries must be formatted as specified on the Instructions page.

Credits

All the credit for the model behind this service goes to the DeepWisdom team, whose winning AutoDL code is reused here.

Largely based on the previous work done for the AutoDL competition, this service would also not have been possible without the help of many people.

Main organizers of the AutoDL challenge:

  • Olivier Bousquet (Google, Switzerland)
  • André Elisseef (Google, Switzerland)
  • Isabelle Guyon (U. Paris-Saclay; UPSud/INRIA, France and ChaLearn, USA)
  • Zhengying Liu (U. Paris-Saclay; UPSud, France)

Other contributors to the organization, starting kit, and datasets, include:

  • Stephane Ayache (AMU, France)
  • Hubert Jacob Banville (INRIA, France)
  • Mahsa Behzadi (Google, Switzerland)
  • Kristin Bennett (RPI, New York, USA)
  • Hugo Jair Escalante (INAOE, Mexico and ChaLearn, USA)
  • Sergio Escalera (U. Barcelona, Spain and ChaLearn, USA)
  • Gavin Cawley (U. East Anglia, UK)
  • Baiyu Chen (UC Berkeley, USA)
  • Albert Clapes i Sintes (U. Barcelona, Spain)
  • Bram van Ginneken (Radboud U. Nijmegen, The Netherlands)
  • Alexandre Gramfort (U. Paris-Saclay; INRIA, France)
  • Yi-Qi Hu (4paradigm, China)
  • Julio Jacques Jr. (U. Oberta de Catalunya, Spain)
  • Meysam Madani (U. Barcelona, Spain)
  • Tatiana Merkulova (Google, Switzerland)
  • Adrien Pavao (U. Paris-Saclay; INRIA, France and ChaLearn, USA)
  • Shangeth Rajaa (BITS Pilani, India)
  • Herilalaina Rakotoarison (U. Paris-Saclay, INRIA, France)
  • Lukasz Romaszko (The University of Edinburgh, UK)
  • Mehreen Saeed (FAST Nat. U. Lahore, Pakistan)
  • Marc Schoenauer (U. Paris-Saclay, INRIA, France)
  • Michele Sebag (U. Paris-Saclay; CNRS, France)
  • Danny Silver (Acadia University, Canada)
  • Lisheng Sun (U. Paris-Saclay; UPSud, France)
  • Sebastien Treger (La Paillasse, France)
  • Wei-Wei Tu (4paradigm, China)
  • Fengfu Li (4paradigm, China)
  • Lichuan Xiang (4paradigm, China)
  • Jun Wan (Chinese Academy of Sciences, China)
  • Mengshuo Wang (4paradigm, China)
  • Jingsong Wang (4paradigm, China)
  • Ju Xu (4paradigm, China)
  • Zhen Xu (Ecole Polytechnique and U. Paris-Saclay; INRIA, France)
  • Michael Vaccaro (U. Paris-Saclay; INRIA, France)

The service is running on the Codalab platform, administered by Université Paris-Saclay and maintained by CKCollab LLC, with primary developers:

  • Eric Carmichael (CKCollab, USA)
  • Tyler Thomas (CKCollab, USA)

ChaLearn was the challenge organization coordinator. Google was the primary sponsor of the challenge. 4Paradigm donated prizes. Other institutions of the co-organizers provided in-kind contributions.
