MetaDL Challenge

Organized by ebadrian

First phase starts: Sept. 30, 2020, midnight UTC

Competition ends: Dec. 8, 2020, 8 p.m. UTC

MetaDL Challenge 

You can also visit the dedicated GitHub repository:

MetaDL repository

Brought to you by ChaLearn and Microsoft

This challenge aims to find meta-learning approaches that are effective in the few-shot learning setting for image classification tasks. These approaches should be time-efficient: meta-learning procedures must not exceed a fixed time budget. More details are available in the Evaluation tab.

The competition is divided into 2 phases: Feedback and Final. In the Feedback Phase, participants can develop their own approaches, make submissions, and check their performance on the leaderboard. In the Final Phase, the last valid submission from the Feedback Phase is blind-tested on an unseen meta-dataset. During an offline Public Phase, the Omniglot dataset (to be downloaded in the Get Data section) is provided so that participants can familiarize themselves with the challenge API.

Participants need to train a meta-learner on a meta-train set and produce a learner (a machine learning algorithm), which is then trained and evaluated on classification tasks generated from the meta-test set. A participant's submission is evaluated by the capacity of this learner to quickly adapt to new, unseen tasks. Please refer to the Evaluation section for more details.
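The meta-learner → learner → predictor workflow can be sketched as follows. The class names (MyMetaLearner, Learner, Predictor) follow the Evaluation description below, but the method signatures and the nearest-centroid logic are illustrative assumptions only; the authoritative API is defined in the starting kit.

```python
import numpy as np

class Predictor:
    """Predicts query-set labels via nearest class centroid (illustrative baseline)."""
    def __init__(self, centroids):
        self.centroids = centroids  # shape (n_ways, feature_dim)

    def predict(self, query_images):
        # Flatten images to feature vectors and pick the closest centroid.
        feats = query_images.reshape(len(query_images), -1)
        dists = np.linalg.norm(feats[:, None] - self.centroids[None], axis=-1)
        return dists.argmin(axis=1)

class Learner:
    """Fits a support set (e.g. 5 ways x 1 shot) and produces a Predictor."""
    def fit(self, support_images, support_labels):
        feats = support_images.reshape(len(support_images), -1)
        centroids = np.stack([feats[support_labels == c].mean(axis=0)
                              for c in np.unique(support_labels)])
        return Predictor(centroids)

class MyMetaLearner:
    """Meta-fits a meta-train set and produces a Learner."""
    def meta_fit(self, meta_train_set):
        # A real submission would meta-train here (e.g. learn an embedding);
        # this baseline ignores the meta-train set entirely.
        return Learner()
```

A real submission would replace the raw-pixel features with a meta-learned embedding, but the three-object structure is what the ingestion program expects.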

Instructions and a starting kit are provided in the dedicated GitHub repository.

The top 3 winners will receive the following prizes:

1st place: 500 USD
2nd place: 300 USD
3rd place: 200 USD

MetaDL Evaluation

For both the Feedback Phase and the Public Phase, the performance of a meta-learning algorithm is measured by evaluating 600 episodes at meta-test time. The participant needs to implement a MyMetaLearner class that can meta-fit a meta-train set and produce a Learner object, which in turn can fit any support set (a.k.a. training set) generated from a meta-test set and produce a Predictor. The accuracy of these predictors on each query set (or test set) is then averaged to produce a final score. In the Feedback Phase, this score is used to form a leaderboard. In the Final Phase, this score is the criterion for deciding winners (and a leaderboard will also be released). One important aspect of the challenge is that submissions must produce a Learner within a 2-hour compute budget. The VM on which your code will be run is an Azure NV24, which has 4 Tesla M60 GPUs and 224 GB of RAM.
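The scoring protocol above amounts to averaging query-set accuracy over meta-test episodes. A minimal sketch, assuming episodes arrive as (support images, support labels, query images, query labels) tuples — the actual data format and evaluation loop are defined by the challenge's ingestion and scoring programs:

```python
import numpy as np

def evaluate(meta_learner, meta_train_set, episodes):
    """Score a submission: mean query-set accuracy over meta-test episodes.

    `meta_learner` follows the MyMetaLearner interface described above;
    `episodes` yields (support_x, support_y, query_x, query_y) tuples
    (in the challenge, 600 of them)."""
    learner = meta_learner.meta_fit(meta_train_set)  # must finish within the time budget
    accuracies = []
    for support_x, support_y, query_x, query_y in episodes:
        predictor = learner.fit(support_x, support_y)
        preds = predictor.predict(query_x)
        accuracies.append(np.mean(preds == query_y))
    return float(np.mean(accuracies))  # the leaderboard score
```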

Episodes at meta-test time

We use the 5-way 1-shot few-shot learning setting:

Support set: 5 classes and 1 example per class (labelled examples)
Query set: 5 classes and a varying number of examples per class (unlabelled examples)
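To make the episode structure concrete, here is a sketch of how a 5-way 1-shot episode could be sampled from a labelled image pool. The challenge provides its own episode generator, so this is only illustrative, and the fixed per-class query size is a simplification (the challenge uses a varying number of query examples):

```python
import numpy as np

def sample_episode(images, labels, n_ways=5, n_shots=1, n_query=15, rng=None):
    """Sample one episode: a support set and a query set over `n_ways` classes."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_ways, replace=False)
    s_idx, s_y, q_idx, q_y = [], [], [], []
    for new_label, c in enumerate(classes):
        # Shuffle this class's examples; first n_shots go to the support set.
        idx = rng.permutation(np.where(labels == c)[0])
        s_idx.extend(idx[:n_shots])
        s_y += [new_label] * n_shots
        chosen = idx[n_shots:n_shots + n_query]
        q_idx.extend(chosen)
        q_y += [new_label] * len(chosen)
    return images[s_idx], np.array(s_y), images[q_idx], np.array(q_y)
```

Note that labels are re-indexed to 0..n_ways-1 within each episode, so the learner cannot rely on class identities being stable across episodes.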


Challenge Rules

  • General Terms: This challenge is governed by the General ChaLearn Contest Rule Terms, the Codalab Terms and Conditions, and the specific rules set forth on this page.
  • Announcements: To receive announcements and be informed of any change in rules, the participants must provide a valid email.
  • Conditions of participation: Participation requires complying with the rules of the challenge. Prize eligibility is restricted by US government export regulations; see the General ChaLearn Contest Rule Terms. The organizers, sponsors, their students, close family members (parents, siblings, spouse, or children) and household members, as well as any person having had access to the truth values or to any information about the data or the challenge design giving them an unfair advantage, are excluded from participation. A disqualified person may submit one or several entries in the challenge and request to have them evaluated, provided that they notify the organizers of their conflict of interest. If a disqualified person submits an entry, this entry will not be part of the final ranking and does not qualify for prizes. The participants should be aware that ChaLearn and the organizers reserve the right to evaluate for scientific purposes any entry made in the challenge, whether or not it qualifies for prizes.
  • Dissemination: The challenge is part of the official selection of the AAAI 2021 conference. Top ranking participants will be invited to submit a paper to a special issue on Meta-learning in the Proceedings of Machine Learning Research (PMLR).
  • Registration: The participants must register on Codalab and provide a valid email address. Teams must register only once and provide a group email, which is forwarded to all team members. Teams or solo participants registering multiple times to gain an advantage in the competition may be disqualified.
  • Anonymity: The participants who do not present their results at the workshop can elect to remain anonymous by using a pseudonym. Their results will be published on the leaderboard under that pseudonym, and their real name will remain confidential. However, the participants must disclose their real identity to the organizers to claim any prize they might win. See our privacy policy for details.
  • Submission method: The results must be submitted through this CodaLab competition site. The number of submissions per day and the maximum total computational time are restricted and subject to change, depending on the number of participants. Using multiple accounts to increase the number of submissions is NOT permitted. In case of problems, contact the organizers. The entries must be formatted as specified on the Instructions page.
  • Reproducibility: Participants should make every effort to guarantee the reproducibility of their method (for example, by fixing all random seeds involved). In the Final Phase, all submissions will be run three times, and the worst performance will be used for the final ranking.
  • Prizes: The three top ranking participants in the Final phase blind testing may qualify for prizes. The last valid submission in Feedback Phase will be automatically submitted to the Final Phase for final evaluation. The participant must fill out a fact sheet (TBA) briefly describing their methods. There is no other publication requirement. The winners will be required to make their code publicly available under an OSI-approved license such as, for instance, Apache 2.0, MIT or BSD-like license, if they accept their prize, within a week of the deadline for submitting the final results. Entries exceeding the time budget will not qualify for prizes. In case of a tie, the prize will go to the participant who submitted his/her entry first. Non winners or entrants who decline their prize retain all their rights on their entries and are not obliged to publicly release their code.
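Regarding the reproducibility rule above (fixing all random seeds): a minimal seeding helper might look like the following. The seed value and helper name are arbitrary; framework-specific seeding (TensorFlow, PyTorch) depends on what your submission actually uses.

```python
import os
import random

import numpy as np

def fix_seeds(seed=42):
    """Fix the main Python-level sources of randomness so repeated runs agree."""
    # Note: PYTHONHASHSEED only affects hash randomization if set before the
    # interpreter starts; it is exported here for subprocesses.
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    # If using TensorFlow: tf.random.set_seed(seed)
    # If using PyTorch: torch.manual_seed(seed)
```

Since the worst of three Final Phase runs counts, non-determinism (e.g. GPU op scheduling) directly lowers your effective score, so seeding is worth the small effort.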


Start: Sept. 30, 2020, midnight

Description: Feedback phase: create models and submit them, or directly submit results on validation and/or test data; feedback is provided on the validation set only.


Start: Dec. 5, 2020, noon

Description: Final phase: submissions from the previous phase are automatically cloned and used to compute the final score. The results on the test set will be revealed when the organizers make them available.

