It is well known that the success of machine learning methods on visual recognition tasks is highly dependent on access to large labeled datasets. Unfortunately, performance often drops significantly when the model is presented with data from a new deployment domain which it did not see in training, a problem known as dataset shift. The VisDA challenge aims to test domain adaptation methods’ ability to transfer source knowledge and adapt it to novel target domains.
For details and instructions on how to participate, please visit the VisDA challenge website, where you can download the datasets and development kits. The challenge includes two tracks, and participants are welcome to enter either one or both.
For evaluation metrics and instructions on how to format submissions, please see the challenge ReadMe.
For submission rules and guidelines, please see the website.
The main leaderboard shows results of adapted models and will be used to determine the final team ranks. The expanded leaderboard additionally shows each team's source-only model, i.e. a model trained only on the source domain without any adaptation. Source-only results will not be used to determine team ranks; however, please make sure to submit both sets of results, as they are useful for estimating how much a method improves upon its source-only baseline.
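As a rough illustration of why both submissions are useful, the improvement of an adapted model over its source-only baseline can be summarized as an absolute and a relative gain. This is a minimal sketch with made-up accuracy numbers, not an official evaluation script:

```python
def adaptation_gain(source_only_acc, adapted_acc):
    """Absolute and relative improvement of an adapted model
    over its source-only baseline (illustrative only)."""
    absolute = adapted_acc - source_only_acc
    relative = absolute / source_only_acc
    return absolute, relative

# Hypothetical accuracies for a source-only and an adapted model:
abs_gain, rel_gain = adaptation_gain(0.52, 0.71)
print(f"absolute gain: {abs_gain:.2f}, relative gain: {rel_gain:.1%}")
```

Reporting both numbers makes it clear whether a high adapted score comes from the adaptation method itself or simply from a strong source-only model.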
Leaderboard entries from users "visda_[model]" are our baseline results using that model.
Start: June 19, 2017, midnight
Description: Validation Phase. Please upload two different sets of results, one for your source-only model and one for your adaptation model, and see the challenge website for submission instructions.
Start: Sept. 8, 2017, midnight
Description: Testing Phase. Please upload two different sets of results, one for your source-only model and one for your adaptation model, and see the challenge website for submission instructions.