Welcome to the VoxCeleb speaker verification challenge! The goal of this challenge is to probe how well current methods can recognize speakers from speech obtained 'in the wild'. The dataset is drawn from YouTube videos of celebrity interviews, and contains multi-speaker audio ranging from professionally edited and red carpet interviews to more casual conversational recordings, in which background noise, laughter, and other artefacts occur across a wide range of recording environments.
The task of speaker verification is to determine whether two samples of speech are from the same person.
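A common way to approach this task is to extract a fixed-dimensional embedding for each utterance and compare the two embeddings with cosine similarity, accepting the pair as the same speaker if the score exceeds a tuned threshold. The sketch below illustrates this; it is a minimal example, not the official baseline, and the threshold value is an arbitrary placeholder.

```python
import numpy as np

def cosine_score(emb1: np.ndarray, emb2: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings.

    Higher scores mean the two utterances are more likely from the
    same speaker.
    """
    return float(np.dot(emb1, emb2) /
                 (np.linalg.norm(emb1) * np.linalg.norm(emb2)))

def same_speaker(emb1: np.ndarray, emb2: np.ndarray,
                 threshold: float = 0.5) -> bool:
    """Decide 'same speaker' by thresholding the similarity score.

    The threshold here is a placeholder; in practice it is tuned on a
    held-out validation set.
    """
    return cosine_score(emb1, emb2) >= threshold
```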
In order to submit a file you need to click on the "Submit/View Results" link under the "Participate" tab. After this, you will be able to see two buttons, corresponding to the Competition Phases. Click on one of the buttons to choose the phase you want to submit to; the available phases are listed at the bottom of this page.
In order to prevent overfitting to the test data, participants can only submit one result per day. There is also a limit on the total number of submissions for each phase; see more details under "Submit/View Results".
This is the competition site for the Open training data condition, where participants can train on the VoxCeleb2 dev set, for which we have already released speaker verification labels, AND/OR any other data that they see fit. See more details on the VoxSRC Challenge website.
Arsha Nagrani, VGG, University of Oxford,
Joon Son Chung, Naver, South Korea,
Andrew Zisserman, VGG, University of Oxford,
Jaesung Huh, VGG, University of Oxford,
Ernesto Coto, VGG, University of Oxford,
Andrew Brown, VGG, University of Oxford,
Weidi Xie, VGG, University of Oxford,
Mitchell McLaren, Speech Technology and Research Laboratory, SRI International, CA,
Douglas A Reynolds, Lincoln Laboratory, MIT.
For more information, visit the VoxSRC Challenge page.
This work is supported by the EPSRC programme grant Seebibyte EP/M013774/1: Visual Search for the Era of Big Data.
Please see the validation scoring code and the VoxSRC Challenge page for details on the evaluation.
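The primary metric reported on the leaderboard is the Equal Error Rate (EER): the operating point at which the false-accept and false-reject rates are equal. The sketch below shows one simple way to compute it by sweeping a threshold over the scores; it is an illustrative approximation, and the released validation scoring code remains the authoritative reference for how submissions are evaluated.

```python
import numpy as np

def compute_eer(scores: np.ndarray, labels: np.ndarray) -> float:
    """Approximate Equal Error Rate.

    scores: higher means more likely a target (same-speaker) trial.
    labels: 1 for target trials, 0 for impostor trials.
    """
    # Sort trials from highest to lowest score, i.e. from the most
    # to the least confident "same speaker" decision.
    order = np.argsort(-scores)
    labels = labels[order]
    n_target = labels.sum()
    n_impostor = len(labels) - n_target
    # At each candidate threshold (accept the top k trials):
    # false accepts are impostors among the accepted trials,
    # false rejects are targets among the rejected trials.
    fa = np.cumsum(1 - labels) / n_impostor            # false-accept rate
    fr = (n_target - np.cumsum(labels)) / n_target     # false-reject rate
    # EER is where the two error curves cross.
    idx = np.argmin(np.abs(fa - fr))
    return float((fa[idx] + fr[idx]) / 2)
```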
Participation in this competition is open to all who are interested and willing to comply with the rules laid out under the "Learn the Details" and "Participate" tabs. There is no cost to participate, although entries to the challenge will only be considered if a technical report is submitted on time. Restricting your report to 2 pages, including references, should not affect later publication of your method. You can still submit to the leaderboard, however, even if you do not submit a technical report.
By submitting to the challenge, you allow the organizers to use your submitted scores for any purpose, including analysis and comparison with other teams.
We also kindly ask you to associate the CodaLab account you will use for the competition to your institutional e-mail. We reserve the right to revoke your access to the competition otherwise.
In case of any issues, all decisions made by the Organizing Committee will be final.
You can download a ZIP file with the test data from here [md5 checksum is 75a563bca76410daf08b3a38354b0b5a]. The text file with pairs can be downloaded from here. The username and password are both voxsrc2020.
The zip file contains 118,439 .wav files. The text file contains the pairs that you are to evaluate for the competition. There are 1,695,248 pairs to be evaluated.
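To produce a submission, each listed pair must be assigned a similarity score. The sketch below assumes each line of the pairs file names two wav files and writes one score per trial; the exact input and output formats should be checked against the official instructions and the released validation scoring code, and the file names and `scorer` function here are placeholders.

```python
def score_trials(pairs_path: str, out_path: str, scorer) -> None:
    """Score every trial pair in pairs_path and write results to out_path.

    Assumes each input line starts with two wav identifiers, e.g.
    'a.wav b.wav', and writes 'score utt1 utt2' per line. `scorer` is a
    user-supplied function mapping two utterance identifiers to a float
    similarity score (higher = more likely the same speaker).
    """
    with open(pairs_path) as f_in, open(out_path, "w") as f_out:
        for line in f_in:
            utt1, utt2 = line.split()[:2]
            score = scorer(utt1, utt2)
            f_out.write(f"{score:.6f} {utt1} {utt2}\n")
```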
For reference, we have added a baseline result to the leaderboard, submitted by the vggoxford user.
Good Luck!
Start: Aug. 30, 2020, midnight
Description: Submissions for the challenge workshop that will be held in conjunction with Interspeech 2020
Start: Oct. 16, 2020, 11:59 p.m.
Description: Submissions for comparison with previous ones. Not to be taken into account for the challenge workshop
End: Never