DeepFake Game Competition (DFGC) @ IJCB 2021 Forum

> The Sixth Phase (Final Detection Phase) of DFGC Has Begun

The sixth phase has begun. Note that this is the final detection phase! The submission window for this phase runs from now until Apr. 19, 00:00 (UTC), i.e. Apr. 19, 8:00 (Beijing Time). You need to submit at least once in this phase in order to be considered for prizes in the detection track. Please choose only one submission to be shown on the leaderboard (LB); the submission shown on the LB when this phase ends will be your final entry considered for awards. Note that the final top teams will need to send their training code to the organizers so that we can check exact reproducibility and rule out violations of the rules. Please make sure that both your submitted inference code and your training code are free of randomness and are exactly reproducible. Also, please team up if you work as a team.
Submissions in this phase are evaluated against the 21 datasets created in the last phase. This phase has a running-time limit of 600 seconds per submission. We have also updated the input json file to include 68 facial landmarks, in addition to the original bounding box and 5-landmark information, and you may use this extra information in your detection method. Please see the "Updates" section in the starter-kit github project (https://github.com/bomb2peng/DFGC_starterkit).
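For illustration, here is a rough sketch of how such metadata might be loaded and used. The file name and field names below are assumptions for illustration only; the authoritative schema and examples are in the starter-kit.

import json

import cv2  # OpenCV, only used here to crop the detected face

# "metadata.json" and the field names are hypothetical -- see the starter-kit for the real schema.
with open("metadata.json") as f:
    meta = json.load(f)

entry = meta["some_frame.png"]     # per-image record (key name assumed)
x, y, w, h = entry["bbox"]         # initial face bounding box (format assumed to be x, y, w, h)
pts5 = entry["landmarks5"]         # original 5-point facial landmarks
pts68 = entry["landmarks68"]       # newly added 68-point facial landmarks

img = cv2.imread("some_frame.png")
face = img[y:y + h, x:x + w]       # simple crop; alignment using pts68 is up to your method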
Before submitting, it is recommended that you first test your code with our starter-kit in the specified docker environment.
To submit, click the "Submit" button; you may need to wait for seconds or minutes before the submission is fully uploaded, depending on your network speed and submission size. After submission, the status "submitted" indicates that your submission is complete and waiting for evaluation, and the status "running" indicates that your submission is being evaluated on our server. If everything goes well, your best result will be shown on the Leaderboard (if not, please manually push your best score to the LB). Also note that the "detailed results" link on the LB is currently not working due to a bug in the Codalab platform. However, you can still view your own detailed results via "Download output from scoring step" in your submission record and inspect the "scores.txt" file.
Our baseline detection method (an Xception model) has the following detailed results:
Score:0.425283
ExecutionTime:173
Det_score_joshhu:0.061908
Det_score_ctmiu:0.086385
Det_score_nbhh:0.164225
Det_score_combaz:0.008906
Det_score_seanseattle:0.109376
Det_score_jerryHUST:0.309827
Det_score_zhaobh:0.161691
Det_score_DFGCSYSU:0.166870
Det_score_lowtec:0.560525
Det_score_zz110:0.393588
Det_score_yangquanwei:0.374619
Det_score_nodo:0.680770
Det_score_wuyuhong:0.346652
Det_score_yuejiang:0.620199
Det_score_yZzzzzz:0.812093
Det_score_smartz:0.712206
Det_score_baseline:0.995284
Det_score_DFischerHDA:0.859851
Det_score_ganjua:0.265431
Det_score_miaotao:0.418903
Det_score_wany:0.821636
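
As a side note, the overall Score above appears to be simply the average of the 21 per-dataset Det_score values (ExecutionTime is excluded); a quick check:

det_scores = [
    0.061908, 0.086385, 0.164225, 0.008906, 0.109376, 0.309827, 0.161691,
    0.166870, 0.560525, 0.393588, 0.374619, 0.680770, 0.346652, 0.620199,
    0.812093, 0.712206, 0.995284, 0.859851, 0.265431, 0.418903, 0.821636,
]
print(round(sum(det_scores) / len(det_scores), 6))  # prints 0.425283, matching Score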

Reminder on the rules: do NOT use the CelebDF-v2 test set in your training or development, and do NOT use any extra data other than the CelebDF-v2 train set when training for the detection phases. Do NOT use filenames, metadata, or facial ID information for detection. Follow the rules listed in the "Terms and Conditions".
It is OK to create new data, using resources drawn ONLY from the CelebDF-v2 train set, to augment your training data. In that case, if you rank among the top places, you also need to provide this augmented training set to the organizers so reproducibility can be checked.

Have fun!
DFGC Organizers

Posted by: bob_peng @ April 12, 2021, 4:17 a.m.

Hello, I have one question about "randomness".
In fact, my training code has some randomness in both the sampling strategy and a special data augmentation step, but as the number of training iterations increases, my LB score becomes exactly reproducible. Also, my submitted inference code does not have any randomness.
In my method, the adversarial samples are also randomly selected and generated, so removing this randomness would make it almost impossible for me to keep playing in this challenge. Still, I believe my LB score is exactly reproducible.
So, is that against the rules?

Posted by: chenhan. @ April 12, 2021, 6:29 a.m.

Hi, thanks for the question.
Is it possible to fix the random seed in your training process and get exactly the same results in each run?

Posted by: bob_peng @ April 12, 2021, 6:59 a.m.

For training, if fixing the random seed does not remove all of the randomness, we will run the code multiple times to see whether the LB score falls within the variation range.
Make sure that the submitted inference code gets the same results in each run.
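For teams unsure where to start, here is a minimal sketch of fixing the common random seeds (assuming a PyTorch-based pipeline; adapt to your own framework):

import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Fix the usual sources of randomness for (near-)reproducible training runs.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Ask cuDNN for deterministic kernels; this can slow training down slightly.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)

Note that data-loader workers and any third-party augmentation libraries may need their own seeding as well.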

Posted by: bob_peng @ April 12, 2021, 8:52 a.m.

Thanks for your answer!

Posted by: chenhan. @ April 12, 2021, 9:17 a.m.