The sixth phase has begun. Note that this is the final detection phase! The valid submission window for this phase runs from now until Apr. 19, 00:00 (UTC), i.e. Apr. 19, 8:00 (Beijing Time). You need to submit at least once in this phase to be considered for prizes in the detection track. Please choose only one submission to be shown on the leaderboard (LB); the submission on the LB when this phase ends is your final entry for the awards. Note that the final top teams will need to send their training code to the organizers so we can check exact reproducibility and rule out violations of the rules. Please make sure that both your submitted inference code and your training code are free of randomness and exactly reproducible. Also, please team up if you work as a team.
The submissions in this phase are evaluated against the 21 datasets created in the last phase. This phase has a running time limit of 600 seconds per submission. We have also updated our input JSON file with 68 facial landmarks, in addition to the initial bounding box and 5-landmark information. You may optionally use this information in your detection method. Please see the “Updates” section in the starter-kit GitHub project (https://github.com/bomb2peng/DFGC_starterkit).
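As a rough illustration of using the updated metadata, the sketch below parses landmark information from a JSON structure. The key names ("bbox", "landmarks_5", "landmarks_68") and the per-frame layout are assumptions for illustration only; check the actual files shipped with the starter kit for the real schema.

```python
import json

# Hypothetical sample mirroring what the updated metadata MIGHT look like.
# All key names here are assumptions, not the starter kit's real schema.
sample = json.loads("""
{
  "frame_0001": {
    "bbox": [10, 20, 110, 140],
    "landmarks_5": [[30, 50], [80, 50], [55, 75], [40, 100], [75, 100]],
    "landmarks_68": []
  }
}
""")

for frame_id, info in sample.items():
    x1, y1, x2, y2 = info["bbox"]          # assumed [x1, y1, x2, y2] order
    width, height = x2 - x1, y2 - y1
    lm5 = info["landmarks_5"]              # 5 (x, y) points, assumed
    print(frame_id, width, height, len(lm5))
```

The 68-point landmarks could feed an alignment or face-parsing step in your detector, but the actual field names must be taken from the starter kit's files.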
Before submitting, we recommend that you first test your code using our starter kit and the specified Docker environment.
When you click the “Submit” button, you may need to wait for seconds or minutes before the submission is fully uploaded, depending on your network speed and submission size. After submission, the status “submitted” indicates that your submission is complete and waiting for evaluation, and the status “running” indicates that it is being evaluated on our server. If everything goes well, your best result will be shown on the Leaderboard (if not, please manually submit your best score to the LB). Also note that the “detailed results” link on the LB is not working for now due to a bug in the Codalab platform. However, you can still view your own detailed results via “Download output from scoring step” in your submission record and inspect the “scores.txt” file.
Our baseline detection method (an Xception model) has the following detailed results:
Reminder on the rules: do NOT use the CelebDF-v2 test set in your training or development, and do NOT use extra data other than the CelebDF-v2 train set when training in the detection phases. Do NOT use filenames, metadata, or facial ID information for detection. Follow the rules listed in “Terms and Conditions”.
It is OK to create new data using resources that come ONLY from the CelebDF-v2 train set to augment your training data. In this case, you will also need to provide this augmented training set to the organizers for the reproducibility check if you rank in the top places.
Hello, I have one question about “randomness”.
In fact, my training code has some randomness in both the sampling strategy and the special data augmentation. But as the number of training iterations increases, my LB score becomes exactly reproducible. Also, my submitted inference code does not have randomness.
In my method, the adversarial samples are also randomly selected and generated, so removing all randomness would make it almost impossible for me to continue in this challenge. But I think my LB score is exactly reproducible.
So is that against the rules?
Hi, Thanks for the question.
Is it possible to fix the random seed in your training process and get exactly the same results in each run?
For training, if fixing the random seed does not remove all randomness, we will run the code multiple times to check whether the LB score falls within the expected variation range.
Make sure the submitted inference code gets the same results in each run.
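For reference, a minimal sketch of fixing random seeds, assuming a Python/NumPy pipeline; the PyTorch lines are shown as comments since the framework used is your choice:

```python
import random
import numpy as np

def set_seed(seed: int = 42) -> None:
    """Seed every random-number source the pipeline uses."""
    random.seed(seed)
    np.random.seed(seed)
    # If you use PyTorch, additionally:
    # torch.manual_seed(seed)
    # torch.cuda.manual_seed_all(seed)
    # torch.backends.cudnn.deterministic = True
    # torch.backends.cudnn.benchmark = False

# Two runs with the same seed produce identical draws.
set_seed(123)
a = [random.random() for _ in range(3)]
set_seed(123)
b = [random.random() for _ in range(3)]
assert a == b
```

Note that GPU-side non-determinism (e.g. certain cuDNN kernels) may remain even after seeding, which is why the organizers allow a multiple-run variation check for training.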
Thanks for your answer!
Posted by: chenhan @ April 12, 2021, 9:17 a.m.