DeeperForensics Challenge 2020 @ ECCV SenseHuman Workshop

Organized by deeperforensics
Reward $15,000

Phases

  • First phase (Development) starts: Aug. 28, 2020, 11:59 p.m. UTC
  • Competition ends: Oct. 31, 2020, 11:59 p.m. UTC

Overview

Face swapping has become an emerging topic in computer vision and graphics. Indeed, many works on automatic face swapping have been proposed in recent years. These efforts have circumvented the cumbersome and tedious manual face editing processes, hence expediting the advancement in face editing. At the same time, such enabling technology has sparked legitimate concerns, particularly on its potential for being misused and abused. The popularization of "Deepfakes" on the internet has further set off alarm bells among the general public and authorities, in view of the conceivable perilous implications. Accordingly, there is a dire need for countermeasures to be in place promptly, particularly innovations that can effectively detect videos that have been manipulated.

The DeeperForensics Challenge aims at soliciting new ideas to advance the state of the art in real-world face forgery detection. The challenge uses DeeperForensics-1.0, a new large-scale face forgery detection dataset introduced at CVPR 2020. DeeperForensics-1.0 is by far the largest publicly available real-world face forgery detection dataset, comprising 60,000 videos with a total of 17.6 million frames. Extensive real-world perturbations are applied to obtain a more challenging benchmark of larger scale and higher diversity. All source videos in DeeperForensics-1.0 are carefully collected, and fake videos are generated by a newly proposed end-to-end face swapping framework.

The dataset also features a hidden test set, which suggests a new face forgery detection setting that better simulates real-world scenarios:

  • Multiple sources. Fake videos in-the-wild should be manipulated by different unknown methods.
  • High quality. Threatening fake videos should have high quality to fool human eyes.
  • Diverse distortions. Different perturbations should be taken into consideration.

Thus, the hidden test set is richer in distribution than the publicly available DeeperForensics-1.0. In addition, the hidden test set will be updated continually with future versions as Deepfakes technology develops. The evaluation of the DeeperForensics Challenge is performed on the current version of the hidden test set. Participants are required to submit final prediction files, which we will then evaluate; a minimal sketch of producing such a file is given below.
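For illustration only, the following is a minimal sketch of how a prediction file could be assembled. The two-column CSV layout (video file name, predicted probability of being fake) and the predict_fake_probability helper are assumptions made for this example, not the official submission format; please follow the submission guideline for the exact requirements.

    import csv
    from pathlib import Path

    def predict_fake_probability(video_path):
        # Placeholder for your detection model: return P(video is fake) in [0, 1].
        raise NotImplementedError

    def write_predictions(video_dir, output_csv):
        # Score every video in the test directory and write one row per video.
        with open(output_csv, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["filename", "probability"])  # hypothetical header
            for video_path in sorted(Path(video_dir).glob("*.mp4")):
                prob = predict_fake_probability(video_path)
                writer.writerow([video_path.name, "%.6f" % prob])

    # Example usage: write_predictions("hidden_test_videos/", "predictions.csv")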

 

Data Download

To access the DeeperForensics-1.0 dataset and the accompanying perturbation code, please visit its GitHub repository. You can also find the detailed data description and usage in the dataset download and documentation page. Before using the DeeperForensics-1.0 dataset to train face forgery detection models, please read these important tips first.

 

Submission

Please register first for submission. The submission platform and guidelines for DeeperForensics Challenge 2020 have been released. Please read the important hints carefully and submit your resource request first. We will review your request and allocate the evaluation resources to you.

 

Fact Sheet Requirements

To be eligible for awards, please submit a fact sheet after the final test phase introducing your team, your method, and the data you used. For each participating team, please use this fact sheet template. The deadline is Nov. 14, 2020, 11:59 p.m. UTC+0. After the final results are decided, the main winners will be invited to submit a detailed technical report (deadline to be determined).

Please send the compiled PDF file to deeperforensics@gmail.com for the fact sheet submission. Please name your PDF file "{Your CodaLab User Name}_FactSheet.pdf". Make sure that {Your CodaLab User Name} is the one shown in the "User" column of the leaderboard. If you want to update your submission, please send the new version through the same email thread. After the deadline, we will take the newest version as the final submission.

 

Prizes

DeeperForensics Challenge 2020 will award a total of $15,000 in prizes, provided as AWS promotional codes:

  • 1st Place - $7,500
  • 2nd Place - $5,000
  • 3rd Place - $2,500

 

General Rules

Please check the terms and conditions for further rules and details.

 

Technical Report

The technical report of DeeperForensics Challenge 2020 has been released. BibTeX citation:

@article{jiang2021dfc20,
  title={{DeeperForensics Challenge 2020} on Real-World Face Forgery Detection: Methods and Results},
  author={Jiang, Liming and Guo, Zhengkui and Wu, Wayne and Liu, Zhaoyang and Liu, Ziwei and Loy, Chen Change and Yang, Shuo and Xiong, Yuanjun and Xia, Wei and Chen, Baoying and Zhuang, Peiyu and Li, Sili and Chen, Shen and Yao, Taiping and Ding, Shouhong and Li, Jilin and Huang, Feiyue and Cao, Liujuan and Ji, Rongrong and Lu, Changlei and Tan, Ganchao},
  journal={arXiv preprint},
  volume={arXiv:2102.09471},
  year={2021}
}

 

Issues and Contact

If you have any questions, please feel free to discuss them in the Forum. You can also contact us by email at deeperforensics@gmail.com.

Evaluation Criteria

This page describes how the competition submissions will be evaluated and scored.

We use the Binary Cross-Entropy (BCE) loss to evaluate the performance of face forgery detection models:

    BCELoss = -(1/N) · Σ_{i=1}^{N} [ y_i · log(p(y_i)) + (1 − y_i) · log(1 − p(y_i)) ]

where:

  • N is the number of videos in the hidden test set.
  • y_i is the ground-truth label of video i (fake: 1, real: 0).
  • p(y_i) is the predicted probability that video i is fake.

A smaller BCELoss score is better and directly leads to a higher ranking. If two entries have the same BCELoss score, the one with less execution time (in seconds) ranks higher. To avoid an infinite BCELoss for a prediction that is both fully confident and wrong, the score is bounded by a threshold value. A minimal sketch of this scoring procedure is given below.
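For reference only, the following is a minimal sketch of how such a bounded BCE score could be computed with NumPy. The clipping threshold (1e-15 here) is an assumption for illustration, not the organizers' exact bound, and this is not the official evaluation code.

    import numpy as np

    def bce_loss(y_true, y_pred, eps=1e-15):
        # y_true: ground-truth labels (fake: 1, real: 0)
        # y_pred: predicted probabilities that each video is fake
        # eps:    assumed clipping threshold that bounds the score and avoids log(0)
        y_true = np.asarray(y_true, dtype=np.float64)
        # Clipping keeps a fully confident wrong prediction from producing an infinite loss.
        y_pred = np.clip(np.asarray(y_pred, dtype=np.float64), eps, 1.0 - eps)
        return -np.mean(y_true * np.log(y_pred) + (1.0 - y_true) * np.log(1.0 - y_pred))

    # Example usage: bce_loss([1, 0, 1], [0.9, 0.2, 0.6]) is approximately 0.280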

Terms and Conditions

General Rules

The DeeperForensics Challenge 2020 lasts around two months (nine weeks): eight weeks for the development phase and one week for the final test phase. Please refer to Phases for the detailed time points. The challenge officially started at ECCV 2020, at The 2nd Workshop on Sensing, Understanding and Synthesizing Humans.

Participants are recommended but not restricted to train their algorithms on the publicly available DeeperForensics-1.0 dataset. The overview page provides the link to download the data.

The hidden test set used for online evaluation is divided into two parts: test-dev and test-final. Test-dev contains around 1,000 videos that represent the general circumstances of the hidden test set; it is used to maintain a public leaderboard. Test-final contains around 3,000 videos with a distribution similar to test-dev (and includes the test-dev videos); it is used to evaluate final submissions for the competition results. The final results will be revealed around December 2020. Participants are expected to develop more robust and generalizable methods for face forgery detection in real-world scenarios.

When participating in the competition, please be reminded that:

  • Any and all external data used for the competition must be available to all participants without additional cost, and any external data used must be specified in the "method description" when uploading results to the submission website.
  • Results in the correct format must be uploaded to the evaluation server. The evaluation page lists detailed information regarding how results will be evaluated.
  • Each entry must be associated with a team and its affiliation. Please provide this information when requesting the evaluation resources.
  • The online evaluation results must be submitted through this CodaLab competition site of the DeeperForensics Challenge. The online evaluation chances are refreshed every Friday at 11:59 p.m. UTC. Participants can conduct 4 online evaluations (each with a 2.5-hour runtime limit) per week in the development phase. A total of 2 online evaluations (each with a 7.5-hour runtime limit) are allowed during the final test phase. Each participant will be assigned 1 Tesla V100 GPU with 16 GB of memory. The organizer will offer some hints and code to help ensure a successful submission.
  • Using multiple accounts to increase the number of submissions, as well as private sharing outside teams, is strictly prohibited.
  • The organizer reserves the absolute right to disqualify entries that are incomplete or illegible, late entries, or entries that violate the rules.
  • The organizer reserves the absolute right to adjust the competition schedule and rules as circumstances require.
  • The best entry of each team should remain public on the leaderboard at all times.
  • To be eligible for awards, participants should submit a fact sheet introducing their team, method, and the data used. After the final results are decided, the main winners will be invited to submit a detailed technical report.

Terms of Use: DeeperForensics-1.0 Dataset

Before downloading and using the DeeperForensics-1.0 dataset, please agree to the following terms of use. You, your employer and your affiliations are referred to as "User". The authors and their affiliations, Nanyang Technological University and SenseTime, are referred to as "Producer".

  • The DeeperForensics-1.0 dataset is used for non-commercial/non-profit research purposes only.
  • All the source videos in the DeeperForensics-1.0 dataset collected by the Producer are bound by a formal agreement with all the actors. All the videos in the DeeperForensics-1.0 dataset can be used for academic purposes. However, the Producer is NOT responsible for any further use in a defamatory, pornographic or any other unlawful manner, or in violation of any applicable regulations or laws.
  • The User takes full responsibility for any consequence caused by his/her use of DeeperForensics-1.0 dataset in any form and shall defend and indemnify the Producer against all claims arising from such uses.
  • The User should NOT distribute, copy, reproduce, disclose, assign, sublicense, embed, host, transfer, sell, trade, or resell any portion of the DeeperForensics-1.0 dataset to any third party for any purpose.
  • The User can provide his/her research associates and colleagues with access to DeeperForensics-1.0 dataset (the download link or the dataset itself) provided that he/she agrees to be bound by these terms of use and guarantees that his/her research associates and colleagues agree to be bound by these terms of use.
  • The User should NOT remove or alter any copyright, trademark, or other proprietary notices appearing on or in copies of the DeeperForensics-1.0 dataset.
  • This agreement is effective for any potential User of the DeeperForensics-1.0 dataset upon the date that the User first accesses the DeeperForensics-1.0 dataset in any form.
  • The Producer reserves the right to terminate the User's access to the DeeperForensics-1.0 dataset at any time.
  • For using DeeperForensics-1.0 dataset, please cite the following paper:
    @inproceedings{jiang2020deeperforensics1,
      title={{DeeperForensics-1.0}: A Large-Scale Dataset for Real-World Face Forgery Detection},
      author={Jiang, Liming and Li, Ren and Wu, Wayne and Qian, Chen and Loy, Chen Change},
      booktitle={CVPR},
      year={2020}
    }
    

The download link will be sent to you once your request is approved.

Software

Copyright © 2020, DeeperForensics Consortium. All rights reserved. Redistribution and use of this software in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

  • Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
  • Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
  • Neither the name of the DeeperForensics Consortium nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE AND ANNOTATIONS ARE PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Development

Start: Aug. 28, 2020, 11:59 p.m.

Description: In the development phase, you can make up to 100 successful submissions, and the online evaluation chances are refreshed at the stipulated time each week. Failed submissions on CodaLab (this website) caused by unexpected issues can be re-submitted; please do not abuse this. Please enter your method description and the data used.

Final test

Start: Oct. 24, 2020, 11:59 p.m.

Description: In the final test phase, a total of 2 successful submissions are allowed. Failed submissions on CodaLab (this website) caused by unexpected issues can be re-submitted; please do not abuse this. Please enter your method description and the data used. The results will be revealed after the final check.

Competition Ends

Oct. 31, 2020, 11:59 p.m.
