VisDA2019: Multi-Source Domain Adaptation Challenge

Organized by pinghu

First phase: Training and Validation Data Released
June 12, 2019, midnight UTC

Competition Ends
Sept. 28, 2019, 4 p.m. UTC

It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when the model is presented with data from a new deployment domain which it did not see in training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains.

For details and instructions on how to participate, please visit the VisDA challenge website, where you can download the datasets and development kits. This challenge includes two tracks; participants are welcome to enter in one or both.

For evaluation metrics and instructions on how to format submissions, please see the challenge ReadMe.

You will have an option to make your results private or public after submission. The leaderboard will show your CodaLab username, not your team name. Do not use multiple accounts to submit for one team, and limit the number of submissions to the quota specified in the "Participate" section.

The main leaderboard shows results of adapted models and will be used to determine the final team ranks. The expanded leaderboard additionally shows the team's source-only models, i.e. those trained only on the source domain without any adaptation. These results are useful for estimating how much the method improves upon its source-only model, but will not be used to determine team ranks.

For terms and conditions, please see the challenge website.

Ideally, you should train two identical models with the same strategy, approach, and hyperparameters in the following settings:

Model I: train on the labeled Sketch (49,115), Real (122,563), Quickdraw (120,750), and Infograph (37,087) training images plus the unlabeled Clipart training images (34,019); test on the unlabeled Clipart testing images (14,818).
Model II: train on the labeled Sketch (49,115), Real (122,563), Quickdraw (120,750), and Infograph (37,087) training images plus the unlabeled Painting training images (52,867); test on the unlabeled Painting testing images (22,892).

The submission file should contain the predictions of Model I on the 14,818 testing images and the predictions of Model II on the 22,892 testing images. The final ranking is determined by the number of correct predictions the submission file contains across all 37,710 (14,818 + 22,892) images.
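The ranking rule above amounts to counting correct predictions over both target test sets combined. A minimal sketch of that metric, assuming predictions and ground-truth labels are available as parallel lists (the actual evaluation script is not shown in this page):

```python
# Sketch of the ranking metric: total correct predictions
# (equivalently, overall accuracy) across both target test sets.
# The list-based interface here is an assumption for illustration,
# not the challenge's actual evaluation code.

def count_correct(predictions, ground_truth):
    """Count positions where the prediction matches the ground truth."""
    assert len(predictions) == len(ground_truth)
    return sum(p == g for p, g in zip(predictions, ground_truth))

def rank_score(clipart_pred, clipart_gt, painting_pred, painting_gt):
    """Total correct predictions over the 14,818 + 22,892 = 37,710 images."""
    return (count_correct(clipart_pred, clipart_gt)
            + count_correct(painting_pred, painting_gt))
```

Note that because the two test sets differ in size, Painting predictions contribute more to the final score than Clipart predictions under this count-based ranking.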
 
Training and Validation Data Released

Start: June 12, 2019, midnight

Description: (1) Generate "result.txt". (2) Place the result file into a zip file named [team_name]_submission.

Testing Data Released

Start: Aug. 29, 2019, midnight

Description: (1) Generate "result.txt". (2) Place the result file into a zip file named [team_name]_submission.
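The two packaging steps above can be sketched as follows. The internal format of "result.txt" (one predicted label per line, Model I's predictions followed by Model II's) and the ".zip" extension on the archive name are assumptions; consult the challenge ReadMe for the authoritative format.

```python
# Sketch of packaging a submission, assuming "result.txt" holds one
# predicted label per line and the archive follows the
# [team_name]_submission naming pattern. Both are assumptions.
import zipfile

def write_submission(predictions, team_name):
    """Write predictions to result.txt, then zip it for upload."""
    with open("result.txt", "w") as f:
        for label in predictions:
            f.write(f"{label}\n")
    archive = f"{team_name}_submission.zip"
    with zipfile.ZipFile(archive, "w") as zf:
        # The zip should contain only the result file.
        zf.write("result.txt")
    return archive
```

Upload the resulting archive through the "Participate" section within your submission quota.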

Competition Ends

Sept. 28, 2019, 4 p.m.
