The fifth phase has begun. Note that this is the final creation phase! The submission window for this phase runs from now until Apr. 10, 24:00. You must submit at least once in this phase to be considered for prizes in the creation track. Note that the final score will be determined after the final detection phase (the sixth phase) is over, using the detection models submitted in that phase. Your scores on the leaderboard of this phase are evaluated against detection models from the previous detection phase (the fourth phase), so they are not final. Please choose only one submission to be shown on the leaderboard (LB); the submission on the LB when this phase ends is your final entry to be evaluated for awards. Also, please team up on the platform if you work as a team.
The submissions in this phase are evaluated against 23 detection methods from the last phase. Out of the 27 submissions on the last phase's LB, 3 methods produced non-deterministic results when evaluated on the same input, and one team submitted two entries to the LB (so the lower-scoring one is not used); we therefore excluded those 4 detection methods. Please note again that if any randomness is found in your detection method, it will be disqualified, and each team may submit only one entry to the LB.
You can now submit your Deepfake creation results to the competition website. When you click the “Submit” button, you may need to wait several seconds to minutes before the submission is fully uploaded, depending on your network speed and submission size. After submission, the status “submitted” indicates that your submission is complete and waiting for evaluation, and the status “running” indicates that your submission is being evaluated on our server. If everything goes fine, your best result will be shown on the Leaderboard. You may also choose a different submission to be shown on the LB. Note that only one submission from each team should be on the LB, and the submission on the LB when the phase is over is treated as your final submission. Also note that the “detailed results” link on the LB is not working for now due to a bug in the Codalab platform. However, you can still view your own detailed results via “Download output from scoring step” in your submission record and inspecting “scores.txt”.
Please follow the submission file structure requirements detailed in the starterkit.
Note that beginning from the third phase, we added a “Noise_score” term to the evaluation metric for creation results. It penalizes noisy deepfake images: we want to encourage submissions that use new deepfake methods, so we put more weight on image quality. Participants are encouraged to consider more diverse new deepfake methods instead of simply post-processing existing images.
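The exact formula for “Noise_score” is defined by the organizers and is not reproduced here. As a rough illustration of how image noisiness can be quantified, the sketch below uses Immerkær's fast noise-variance estimator, a standard technique; the function name `estimate_noise_sigma` and the choice of estimator are this example's assumptions, not the competition's actual metric.

```python
import numpy as np

def estimate_noise_sigma(img):
    """Estimate the std of additive Gaussian noise in a 2-D grayscale image
    using Immerkaer's fast method (illustrative only, NOT the official metric)."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    # Kernel that cancels constant and linear image structure,
    # leaving a response dominated by pixel noise.
    k = np.array([[1, -2, 1],
                  [-2, 4, -2],
                  [1, -2, 1]], dtype=np.float64)
    # 'valid' 2-D convolution done with shifted slices (pure NumPy).
    acc = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            acc += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    # For zero-mean Gaussian noise, E|response| = 6 * sigma * sqrt(2/pi).
    return np.sqrt(np.pi / 2) * np.abs(acc).sum() / (6.0 * (w - 2) * (h - 2))
```

Checking a candidate submission with an estimator like this before uploading can help you judge whether heavy post-processing has left visible noise that a quality-penalizing term would punish.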