Face swapping has emerged as a prominent topic in computer vision and graphics. Indeed, many works on automatic face swapping have been proposed in recent years. These efforts have circumvented the cumbersome and tedious manual face editing processes, hence expediting the advancement of face editing. At the same time, such enabling technology has sparked legitimate concerns, particularly about its potential for misuse and abuse. The popularization of "Deepfakes" on the internet has further set off alarm bells among the general public and authorities, in view of the conceivable perilous implications. Accordingly, there is a dire need for countermeasures to be in place promptly, particularly innovations that can effectively detect videos that have been manipulated.
The DeeperForensics Challenge aims at soliciting new ideas to advance the state of the art in real-world face forgery detection. The challenge uses the DeeperForensics-1.0 dataset, a new large-scale face forgery detection dataset proposed at CVPR 2020. DeeperForensics-1.0 represents the largest publicly available real-world face forgery detection dataset to date, with 60,000 videos comprising a total of 17.6 million frames. Extensive real-world perturbations are applied to obtain a more challenging benchmark of larger scale and higher diversity. All source videos in DeeperForensics-1.0 are carefully collected, and fake videos are generated by a newly proposed end-to-end face swapping framework.
The dataset also features a hidden test set, which introduces a new face forgery detection setting that better simulates real-world scenarios. The hidden test set is richer in distribution than the publicly available DeeperForensics-1.0. Moreover, it will be updated continually as Deepfakes technology develops. The evaluation of the DeeperForensics Challenge is performed on the current version of the hidden test set. Users are required to submit final prediction files, which we then evaluate.
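The exact layout of the prediction files is specified in the submission guideline; as a rough illustration only, a sketch of writing per-video fake probabilities to a two-column CSV (the "video name, probability of fake" layout and the file name below are assumptions, not the official format):

```python
import csv

# Hypothetical predictions: video file name -> model's probability
# that the video is fake. Names and values are illustrative only.
predictions = {
    "video_0001.mp4": 0.93,
    "video_0002.mp4": 0.07,
}

# Write one "name,probability" row per video (assumed layout).
with open("predictions.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for name, prob in sorted(predictions.items()):
        writer.writerow([name, f"{prob:.6f}"])
```

Always defer to the official submission guideline for the required file name, columns, and ordering.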
To access the DeeperForensics-1.0 dataset and the useful perturbation code, please visit its GitHub repository. You can also find the detailed data description and usage in the dataset download and document page. Before using the DeeperForensics-1.0 dataset for face forgery detection model training, please read these important tips first.
Please register first for submission. The submission platform and guideline for the DeeperForensics Challenge 2020 have been released. Please read the important hints carefully and submit your resource request first. We will review your request and allocate resources to you.
To be qualified to compete for awards, please submit a fact sheet after the final test phase introducing your team information, method, and the data used. Each participating team should use this fact sheet template. The deadline is Nov. 14, 2020, 11:59 p.m. UTC+0. After the final results are decided, the main winners will be invited to submit a detailed technical report (deadline to be determined).
Please send the compiled PDF file to deeperforensics@gmail.com for the fact sheet submission. Name your PDF file "{Your CodaLab User Name}_FactSheet.pdf", and make sure that {Your CodaLab User Name} matches the one shown in the "User" column of the leaderboard. If you want to update your submission, please reply in the same email thread. After the deadline, we will treat the newest version received as the final submission.
DeeperForensics Challenge 2020 will provide prizes with a total of $15,000 in the AWS promotional code format:
Please check the terms and conditions for further rules and details.
The technical report of the DeeperForensics Challenge 2020 has been released. BibTeX citation:
@article{jiang2021dfc20,
  title={{DeeperForensics Challenge 2020} on Real-World Face Forgery Detection: Methods and Results},
  author={Jiang, Liming and Guo, Zhengkui and Wu, Wayne and Liu, Zhaoyang and Liu, Ziwei and Loy, Chen Change and Yang, Shuo and Xiong, Yuanjun and Xia, Wei and Chen, Baoying and Zhuang, Peiyu and Li, Sili and Chen, Shen and Yao, Taiping and Ding, Shouhong and Li, Jilin and Huang, Feiyue and Cao, Liujuan and Ji, Rongrong and Lu, Changlei and Tan, Ganchao},
  journal={arXiv preprint},
  volume={arXiv:2102.09471},
  year={2021}
}
If you have any questions, please feel free to discuss in the Forum. Besides, you can contact us by sending an email to deeperforensics@gmail.com.
This page describes how the competition submissions will be evaluated and scored.
We use the Binary Cross-Entropy (BCE) loss to evaluate the performance of face forgery detection models:

$$\mathrm{BCELoss} = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_i \log p(y_i) + (1 - y_i) \log\big(1 - p(y_i)\big) \right]$$
where:
- N is the number of videos in the hidden test set,
- y_i is the ground-truth label of video i (fake: 1, real: 0),
- p(y_i) is the predicted probability that video i is fake.
A smaller BCELoss score is better and directly contributes to a higher ranking. If two BCELoss scores are identical, the submission with the shorter execution time (in seconds) ranks higher. To avoid an infinite BCELoss when a prediction is both fully confident and wrong, the score is bounded by a threshold value.
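The bounded metric above can be sketched as follows. This is a minimal illustration, not the organizers' evaluation code; in particular, the clipping bound `EPS` is an assumed placeholder, since the official threshold value is not stated here.

```python
import numpy as np

EPS = 1e-15  # assumed bound on predicted probabilities (placeholder)

def bce_loss(y_true, y_pred, eps=EPS):
    """Mean binary cross-entropy over N videos.

    y_true: ground-truth labels (fake: 1, real: 0).
    y_pred: predicted probabilities that each video is fake.
    Probabilities are clipped to [eps, 1 - eps] so that a fully
    confident wrong prediction yields a large but finite loss.
    """
    y_true = np.asarray(y_true, dtype=float)
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p)))

# Mostly-correct predictions give a small loss; a fully confident
# wrong prediction is penalized heavily but remains finite.
print(bce_loss([1, 0], [0.9, 0.1]))
print(bce_loss([1, 0], [0.0, 1.0]))
```

Clipping is what keeps the second call finite: without it, log(0) would make a single wrong-and-confident prediction push the whole score to infinity.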
The DeeperForensics Challenge 2020 will run for around two months (nine weeks): eight weeks for the development phase and one week for the final test phase. Please refer to Phases for the detailed schedule. The challenge officially started at ECCV 2020, during The 2nd Workshop on Sensing, Understanding and Synthesizing Humans.
Participants are recommended but not restricted to train their algorithms on the publicly available DeeperForensics-1.0 dataset. The overview page provides the link to download the data.
The hidden test set used for online evaluation is divided into two parts: test-dev and test-final. Test-dev contains around 1,000 videos that represent the general circumstances of the hidden test set; it is used to maintain a public leaderboard. Test-final contains around 3,000 videos with a distribution similar to test-dev (and includes the test-dev videos); it is used to evaluate final submissions for the competition results. The final results will be revealed around Dec. 2020. Participants are expected to develop more robust and generalizable methods for face forgery detection in real-world scenarios.
When participating in the competition, please be reminded that:
Before downloading and using the DeeperForensics-1.0 dataset, please agree to the following terms of use. You, your employer and your affiliations are referred to as "User". The authors and their affiliations, Nanyang Technological University and SenseTime, are referred to as "Producer".
@inproceedings{jiang2020deeperforensics1,
  title={{DeeperForensics-1.0}: A Large-Scale Dataset for Real-World Face Forgery Detection},
  author={Jiang, Liming and Li, Ren and Wu, Wayne and Qian, Chen and Loy, Chen Change},
  booktitle={CVPR},
  year={2020}
}
The download link will be sent to you once your request is approved.
Copyright © 2020, DeeperForensics Consortium. All rights reserved. Redistribution and use of the software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Start: Aug. 28, 2020, 11:59 p.m.
Description: In the development phase, you can make up to 100 successful submissions; the online evaluation quota is replenished at the stipulated time each week. Failed submissions on CodaLab (this website) caused by unexpected issues can be re-submitted, but please do not abuse this. Please enter your method description and the data used.
Start: Oct. 24, 2020, 11:59 p.m.
Description: In the final test phase, a total of 2 successful submissions are allowed. Failed submissions on CodaLab (this website) caused by unexpected issues can be re-submitted, but please do not abuse this. Please enter your method description and the data used. The results will be revealed after the final check.
Oct. 31, 2020, 11:59 p.m.