AutoML 2018 challenge :: PAKDD2018

Organized by hugo.jair
Reward $3,000

First phase: Feedback
Starts: Nov. 30, 2017, midnight UTC

Competition ends: March 31, 2020, midnight UTC

AutoML 2018

Fully Automatic Machine Learning without ANY human intervention

Machine learning has achieved great success in online advertising, recommender systems, financial market analysis, computer vision, linguistics, bioinformatics and many other fields, but these achievements crucially depend on human machine-learning experts. In almost all of these successful applications, human experts are involved at every stage: transforming real-world problems into machine-learning tasks, collecting data, engineering features, selecting or designing the model architecture, tuning hyper-parameters, evaluating model performance, deploying the system online, and so on. Since the complexity of these tasks is often beyond non-experts, the rapid growth of machine-learning applications has created a demand for off-the-shelf machine-learning methods that can be used easily and without expert knowledge. We call the resulting research area, which targets the progressive automation of machine learning, AutoML (Automatic Machine Learning).

In this challenge you are asked to provide code for solving real-world classification problems without any human intervention. During the feedback phase you can submit your code, which will be evaluated on public datasets, and you will receive immediate feedback on the performance of your method. Since the final goal of the challenge is to perform AutoML, your last code submission in the feedback phase will be run on five further private datasets. The performance on these latter datasets will be used to rank participants.

There is also a phase in which you can submit predictions, although the focus of the challenge is on AutoML.

This challenge is brought to you by 4Paradigm and ChaLearn. Contact the organizers.


Evaluation

Tasks

The goal of this challenge is to expose the research community to real world datasets of interest to 4Paradigm. All datasets are formatted in a uniform way, though the type of data might differ. The data are provided as preprocessed matrices, so that participants can focus on classification, although participants are welcome to use additional feature extraction procedures (as long as they do not violate any rule of the challenge). All problems are binary classification problems and are assessed with the normalized Area Under the ROC Curve (AUC) metric (i.e. 2*AUC-1).
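The normalized AUC metric mentioned above can be computed from the ordinary AUC. As a minimal illustration (not the official challenge scoring code), here is a pure-Python sketch that computes AUC via the Mann-Whitney U statistic and then rescales it, so that 0 corresponds to random guessing and 1 to a perfect ranking:

```python
def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    # Fraction of (positive, negative) pairs where the positive example
    # receives a higher score; ties count as half a win.
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def normalized_auc(y_true, scores):
    """Challenge metric: 2*AUC - 1 (0 = random, 1 = perfect)."""
    return 2 * auc(y_true, scores) - 1
```

For example, a classifier that ranks all positives above all negatives scores 1, while one that scores every example identically scores 0.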


The identity of the datasets and the type of data are concealed, though their structure is revealed. The final score in Phase 2 will be each participant's average rank across all test datasets; a final ranking will be generated from these scores, and winners will be determined accordingly.
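The average-rank scoring described above can be sketched as follows. The data layout (a mapping from dataset name to per-participant ranks) is an assumption made for illustration, not the official format:

```python
def final_scores(per_dataset_ranks):
    """Average each participant's rank across datasets (lower is better).

    per_dataset_ranks: {dataset_name: {participant: rank}} -- a
    hypothetical layout used only for this illustration.
    """
    totals = {}
    for ranks in per_dataset_ranks.values():
        for participant, rank in ranks.items():
            totals[participant] = totals.get(participant, 0) + rank
    n_datasets = len(per_dataset_ranks)
    # Average rank per participant; the final leaderboard sorts ascending.
    return {p: total / n_datasets for p, total in totals.items()}
```

A participant who ranks 1st on one dataset and 2nd on another would thus receive a final score of 1.5.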


The tasks are constrained by a time budget. The CodaLab platform provides computational resources shared by all participants. Each code submission is executed on a compute worker with the following characteristics: 2 cores / 8 GB memory / 40 GB SSD, running Ubuntu. To ensure fairness of the evaluation, each code submission is subject to an execution time limit.
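A common pattern for staying inside such an execution time limit is to train in rounds and check the remaining budget before each round. The helper below is a hypothetical sketch, not part of the challenge starting kit:

```python
import time

class TimeBudget:
    """Track remaining wall-clock time against a fixed budget.

    Illustrative helper (hypothetical): the official time limit is
    enforced by the platform, not by participant code.
    """
    def __init__(self, seconds):
        self.deadline = time.monotonic() + seconds

    def remaining(self):
        return self.deadline - time.monotonic()

    def exceeded(self):
        return self.remaining() <= 0

def train_anytime(rounds, seconds_per_round_estimate, budget):
    """Run up to `rounds` training rounds, stopping early if the next
    round would likely overrun the budget. Returns rounds completed."""
    completed = 0
    for _ in range(rounds):
        if budget.remaining() < seconds_per_round_estimate:
            break  # keep the best model trained so far
        time.sleep(0)  # placeholder for one round of real training
        completed += 1
    return completed
```

Designing the solution as an "anytime" algorithm, one that can return its best model whenever the budget runs out, avoids being cut off mid-training with nothing to predict.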

Phases

The challenge has two phases:

  • Phase 1: Feedback phase. You can practice on 5 datasets similar in nature to the datasets of the second phase. You can make a limited number of submissions, and you can download the labeled training data and the unlabeled test sets, so you can prepare your code submission offline and submit it later. Your LAST submission must be a CODE SUBMISSION, because it will be forwarded to the next phase for final testing.
  • Phase 2: AutoML challenge phase. Your last submission of the previous phase is blind tested on five new datasets. Your code will be trained and tested automatically, without human intervention.

During the feedback phase, the results of your last submission on test data are shown on the leaderboard. Prizes will be awarded in Phase 2 only.

Prizes

Prizes sponsored by 4Paradigm will be granted to the top-ranking participants, provided they comply with the rules of the challenge (see the Terms and Conditions section). The distribution of prizes is as follows.

  • First place: USD 3,000 + certificate + USD 600 travel grant
  • Second place: USD 1,500 + certificate + USD 600 travel grant
  • Third place: USD 750 + certificate + USD 600 travel grant


* A fraction of the prize amount might be used as travel grant to attend the conference and workshop.


Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. Prize eligibility is restricted by Chinese government export regulations. The organizers, sponsors, their students, close family members (parents, sibling, spouse or children) and household members, as well as any person having had access to the truth values or to any information about the data or the challenge design giving him (or her) an unfair advantage, are excluded from participation. A disqualified person may submit one or several entries in the challenge and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that ChaLearn and the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.
  • Dissemination: Top ranked participants will be invited to attend a workshop collocated with PAKDD 2018 to describe their methods and findings. Winners of prizes are expected to attend. The challenge is part of the competition program of the PAKDD2018 conference. Organizers are making arrangements for the possible publication of a book chapter or article written jointly by organizers and the participants with the best solutions.
  • Registration: The participants must register to Codalab and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified.
  • Anonymity: The participants who do not present their results at the workshop can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to claim any prize they might win. See our privacy policy for details.
  • Submission method: Results must be submitted through this CodaLab competition site. Participants can make up to 3 submissions per day in the Tweakathon (feedback) phase. Using multiple accounts to increase the number of submissions is NOT permitted. There are NO submissions in the Final and AutoML phases (the submissions from the previous Tweakathon phase migrate automatically). In case of problems, send email to automl2018@gmail.com. Entries must be formatted as specified on the "Participate>Get data" page.


Feedback

Start: Nov. 30, 2017, midnight

Description: Practice on five datasets similar to those of the AutoML phase. You can make multiple submissions of code. The results on test data are shown on the leaderboard.

AutoML blind test

Start: March 12, 2018, 11:59 p.m.

Description: Your last CODE submission of the first phase will be blindly tested on new datasets. No new submission is made in this phase.

Competition Ends

March 31, 2020, midnight
