The success of machine learning methods on visual recognition tasks depends heavily on access to large labeled datasets. Unfortunately, performance often drops significantly when a model is presented with data from a new deployment domain that it did not see during training, a problem known as dataset shift. The VisDA challenge tests the ability of domain adaptation methods to transfer knowledge learned on a source domain and adapt it to novel target domains.
For details and instructions on how to participate, please visit the VisDA challenge website, where you can download the datasets and development kits. The challenge includes two tracks, and participants are welcome to enter one or both.
For evaluation metrics and instructions on how to format submissions, please see the challenge ReadMe.
You will have the option to make your results private or public after submission. The leaderboard will show your CodaLab username, not your team name. Do not use multiple accounts to submit for a single team, and keep within the submission quota specified in the "Participate" section.
The main leaderboard shows the results of adapted models and will be used to determine the final team rankings. The expanded leaderboard additionally shows each team's source-only models, i.e., models trained only on the source domain without any adaptation. These results are useful for estimating how much a method improves upon its source-only baseline, but they will not be used to determine team rankings.
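As a rough illustration of how the source-only comparison can be read, the sketch below scores a source-only model and an adapted model on the same target labels and reports the gain. The metric shown (mean per-class accuracy) and the toy label arrays are assumptions for illustration only; the official evaluation metric and file formats are specified in the challenge ReadMe.

# Illustrative sketch only: the metric (mean per-class accuracy) is assumed here,
# not taken from the challenge ReadMe.
import numpy as np

def mean_per_class_accuracy(y_true, y_pred, num_classes):
    # Average the accuracy of each class so rare classes count as much as common ones.
    accs = []
    for c in range(num_classes):
        mask = (y_true == c)
        if mask.any():
            accs.append((y_pred[mask] == c).mean())
    return float(np.mean(accs))

# Hypothetical target-domain labels and predictions.
y_true = np.array([0, 0, 1, 1, 2, 2])
source_only_pred = np.array([0, 1, 1, 0, 2, 1])
adapted_pred = np.array([0, 0, 1, 1, 2, 1])

baseline = mean_per_class_accuracy(y_true, source_only_pred, num_classes=3)
adapted = mean_per_class_accuracy(y_true, adapted_pred, num_classes=3)
print(f"source-only: {baseline:.3f}, adapted: {adapted:.3f}, gain: {adapted - baseline:+.3f}")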
For terms and conditions, please see the challenge website.
Phase 1 start: June 12, 2019, midnight
Phase 2 start: Aug. 29, 2019, midnight
End: Sept. 28, 2019, 4 p.m.
Submission (both phases): (1) Generate "result.txt". (2) Place the result file into a zip file named [team_name]_submission.
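As a concrete illustration of the two submission steps above, the following minimal Python sketch zips a pre-generated "result.txt" into an archive named [team_name]_submission. The team name is a placeholder, and the contents of result.txt must follow the format described in the challenge ReadMe.

# Minimal packaging sketch, assuming predictions are already written to "result.txt".
import zipfile

team_name = "my_team"  # placeholder; replace with your actual team name
with zipfile.ZipFile(f"{team_name}_submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write("result.txt")  # place the result file at the root of the archive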